Some Pragmatic Issues in the Planning of Definite and Indefinite Noun Phrases

Douglas E. Appelt
Artificial Intelligence Center, SRI International
and Center for the Study of Language and Information, Stanford University

1 Introduction

In this paper we examine the pragmatic knowledge an utterance-planning system must have in order to produce certain kinds of definite and indefinite noun phrases. An utterance-planning system, like other planning systems, plans actions to satisfy an agent's goals, but allows some of the actions to consist of the utterance of sentences. This approach to language generation emphasizes the view of language as action, and hence assigns a critical role to pragmatics.

The noun phrases under consideration in this paper are those that presuppose the existence of an individual that could be described by the description D. In other words, when a speaker uses a noun phrase with description D, it makes sense to ask the question "Which x is D?" This criterion includes more than strictly referential uses of noun phrases, because it is not necessary for the speaker or hearer to know what individual is described by D -- it is merely necessary that the existence of such an individual is presupposed. Consider the attributive description in sentence (1):

(1) The runner who wins tomorrow's race will qualify for the semifinals.

The description "runner who wins tomorrow's race" cannot be referential, because, under ordinary circumstances, the speaker could not possibly know who it is that would fit the description. Nevertheless, it is still reasonable to ask which runner will win tomorrow's race, because the description is objectively true of some individual.

This qualification excludes noun phrases whose referents are bound within the scope of a universal quantifier, such as "the woman ..." in

(2) Every man wants to meet the woman of his dreams.

For a similar reason, indefinites within the scope of a sentential negation are excluded because they introduce an existential quantifier, which, under the scope of negation, is really a universal quantifier. Therefore, "a screwdriver" in

(3) John does not have a screwdriver.

is excluded because, under most circumstances of its use, there is no screwdriver that the description in sentence (3) denotes. Predicate nominals are excluded, as in the sentence

(4) John wants to be a doctor.

because one would not ask the question "Which doctor does John want to be?"

The choice of this particular class of noun phrases is motivated by considerations relevant to planning. When a speaker communicates with a hearer, he often intends the hearer to hold some attitudes toward individuals in the domain. This is particularly true in task-oriented dialogues where the hearer may have to locate and manipulate things in his environment.

The theory of utterance planning assumed for the purpose of this analysis is the one embodied in KAMP (Appelt, 1985). Individuals are represented by terms in an intensional logic of knowledge and action. A metalanguage is used to axiomatize the relationship that holds between the terms and the individuals they denote. The terms can consist of predicates combined with an iota operator, as in

    ιx D(x), where D(x) = D₁(x) ∧ ... ∧ Dₙ(x).

The predicates Dᵢ are called descriptors, and their conjunction, D, is called a description. Because most noun phrases employ terms that are constructed from a description, the words "term" and "description" are often used interchangeably.
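The iota-operator machinery can be made concrete with a small sketch. The following Python fragment is hypothetical (the class and the toy domain are invented for illustration, and are not part of KAMP): it represents a description as a conjunction of descriptor predicates and computes its denotation over a domain of individuals.

```python
# Hypothetical sketch: a description D(x) = D1(x) & ... & Dn(x) as a
# conjunction of descriptor predicates over a small domain of individuals.

class Description:
    def __init__(self, *descriptors):
        self.descriptors = descriptors      # each descriptor: individual -> bool

    def holds_of(self, individual):
        return all(d(individual) for d in self.descriptors)

    def denotation(self, domain):
        """Return the individuals the description is true of."""
        return [x for x in domain if self.holds_of(x)]

# Toy domain and descriptors (illustrative only).
domain = ["wrench-1", "wrench-2", "pump-1"]
is_wrench = lambda x: x.startswith("wrench")
is_used_on_pump = lambda x: x == "wrench-1"

d = Description(is_wrench, is_used_on_pump)
print(d.denotation(domain))   # ['wrench-1'] -- a uniquely denoting description
```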
The propositional content of the speaker's utterance is represented by a sentence in the intensional logic involving the terms discussed above. Uttering a sentence entails performing a number of actions, called concept activation actions, which result in the terms constituting the proposition receiving a special status called "active." The proposition that the speaker intends to convey is a predication involving the active terms. Referring is a particular type of concept activation action with relatively strict conditions on what must be mutually believed by the speaker and hearer for the action to succeed. Searle (1969) presents an analysis of referring as a speech act and dismisses many uses of noun phrases as nonreferring. Such nonreferring noun phrases occur very frequently, and the considerations that underlie their planning share much in common with those that underlie actual referring. Therefore, the concept activation action provides a suitable generalization that allows a plan-based treatment of many more uses of noun phrases.

2 Research Objectives

The analysis presented in this paper represents one of the first steps toward a plan-based account of definite and indefinite noun phrases. Ideally, such an account would (1) provide a semantics for noun phrases, (2) define actions like "uttering a definite noun phrase," and (3) provide an analysis that shows how the speaker's intentions follow directly from the semantics of the noun phrase he utters, plus conditions on mutual knowledge and general principles of rationality. This program is very much in the spirit of the analysis of illocutionary acts provided by Cohen and Levesque (1980), who demonstrate how illocutionary acts can be defined in terms of the kinds of inferences made, given a semantic analysis of an utterance, facts about mutual knowledge, and general principles of rational behavior.

Cohen (1984) provided such an analysis for referring actions by postulating a semantics for the definite determiner that would give the semantics of a definite noun phrase as a request to identify the referent of a description. This analysis would be impossible to extend to the more general concept activation actions, because, in some cases, the speaker intends that the hearer not identify the denotation of the description, even when a definite noun phrase is used. A complete analysis along these lines that subsumes both referring and nonreferring noun phrases has yet to be worked out.

As an intermediate step toward this ultimate goal, we shall propose a taxonomy of concept activation actions that convey the various intentions a speaker may have with respect to a hearer and a description. This taxonomy is of theoretical interest, because it characterizes differences and similarities among uses of noun phrases that current theories do not characterize. It is also of practical interest for utterance planning, because the set of actions to be proposed provides a useful level of abstraction for the reasoning processes of an utterance-planning system. For example, certain planning strategies such as action subsumption (Appelt, 1985) are applicable only to certain kinds of concept activation actions and not to others. Therefore, even if the complete plan-based analysis of noun phrases is worked out, the taxonomy of actions presented here will still be of practical importance.
Until an analysis like Cohen and Levesque's is worked out, the concept activation actions here will be treated like illocutionary acts in a speech-act theory. When a hearer understands an utterance, he reasons about whether it constitutes an assertion, a request, a warning, etc. Therefore, understanding one of the definite or indefinite noun phrases under consideration in this paper is assumed to entail recognition of what concept activation action the speaker intends to perform.

3 Summary of Actions Underlying Noun Phrases

There are many distinctions that one could draw between noun phrases, only some of which are relevant to planning. For example, one could distinguish noun phrases that refer to amorphous substances from those that refer to discrete entities. Such a distinction may have some valid motivation, but it is not necessarily so from the standpoint of planning. It would be well motivated only if there were a clear difference in the preconditions and effects of the concept activation actions underlying mass terms, or in the strategy for the selection of descriptors. This does not seem to be the case for mass versus discrete entities.

However, there are two criteria that clearly affect the relevant preconditions, intended effects, and planning strategies of concept activation actions: (1) whether the speaker intends that the hearer identify the denotation of the description, and (2) how much mutual knowledge the speaker and hearer share about the description's denotation. The first criterion is what (roughly) distinguishes referring noun phrases from nonreferring noun phrases. The necessity of the hearer performing the identification constrains the description to be one that facilitates the hearer's formulation of a plan to do so.

The second criterion is the knowledge that is shared by the speaker and the hearer at the time of the utterance. Planning strategies are influenced by whether or not the speaker and hearer mutually believe appropriate facts about the intended referent. In particular, if the speaker and hearer share enough knowledge about the description's denotation and the contextual situation, it may be possible for the hearer to recognize the speaker's intentions using only a subset of the descriptors in the noun phrase's description. In such a situation, the speaker can augment the description with additional descriptors for the purpose of informing the hearer that they are true of the denotation of the other part of the description. Such a strategy is called action subsumption (Appelt, 1985). The action subsumption strategy cannot be used with concept activation actions that are not based on shared knowledge.

Since there are two dimensions relevant to characterizing concept activation actions, it is possible to define four actions, as illustrated in Figure 1.

[Figure 1: Four Types of Concept Activation Actions. The figure arranges the four actions in a two-by-two matrix by identification intention and shared knowledge, listing for each of SI, NSI, SNI, and NSNI the types of noun phrases that typically realize it and its associated planning strategy.]
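Since the taxonomy is generated by two binary criteria, the choice among the four actions is mechanical. The sketch below is a hypothetical illustration (the function names are invented); it also records the constraint, stated above, that action subsumption is available only for the shared-knowledge actions.

```python
# Hypothetical sketch: choosing a concept activation action from the two
# criteria of Figure 1. The two booleans are assumed to be established by
# the planner's reasoning about its goals and about mutual belief.

def choose_action(identification_intended: bool, knowledge_shared: bool) -> str:
    """Map the two planning criteria onto the four action types."""
    if identification_intended:
        return "SI" if knowledge_shared else "NSI"
    else:
        return "SNI" if knowledge_shared else "NSNI"

# Action subsumption is an option only for the shared-knowledge actions.
def subsumption_applicable(action: str) -> bool:
    return action in ("SI", "SNI")

print(choose_action(True, False))        # NSI
print(subsumption_applicable("NSI"))     # False
```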
These actions are SI (shared concept activation with identification intention), NSI (nonshared concept activation with identification intention), SNI (shared concept activation with no identification intention), and NSNI (nonshared concept activation with no identification intention). Each action has distinct preconditions, effects, and an associated planning strategy.

4 Mutual Knowledge and Identification

The two most important considerations in planning concept activation actions are (1) whether or not the speaker intends the hearer to identify the referent of the description and (2) what knowledge about the description's possible denotations the speaker and hearer share.

What it means for an agent to "identify" the referent of a description is a topic of considerable complexity. Searle (1969) states that "So identification ... rests squarely on the speaker's ability to supply an expression ... which is satisfied uniquely by the object to which he intends to refer." What counts as an identifying description depends on the purpose for which the agent is identifying the description's denotation. For example, the description that one must know to carry out a plan requiring the identification of "John's residence" may be quite different depending on whether one is going to visit him, or mail him a letter. If I want to speak to a guest at a Halloween party, I need only a description capable of distinguishing him from the other guests at the party, not to know who it really is wearing the disguise.

Identification of the denotation of a term D is therefore defined as finding another term D' (called a prima facie (PF) identifiable term) that has the same denotation as D according to the hearer's knowledge, but that meets certain syntactic criteria for being the "right kind" of term. It is stipulated that any two distinct PF identifiable terms must denote different individuals in the same situation. The simplest criterion for PF identifiability that meets this requirement is that the term be a standard name. Because each standard name denotes the same individual in any context, knowing that a particular standard name is equivalent to a term implies that the agent knows the denotation of the term. Furthermore, any two distinct standard names denote different individuals.

The standard name approach was taken by the KAMP system. The standard name assumption has two difficulties. First, it is extremely implausible to believe that an agent has a unique name for anything that can be referred to. Also, knowing a standard name implies having made an absolute identification. Therefore, to refer to a guest at a costume party, it is a consequence of successful identification that the speaker and the hearer mutually know the identity of the person in the disguise, which is obviously too strong a condition for successful reference. Developing adequate criteria for PF identifiable terms is an important research problem; however, none of the points in this paper depend on what the criteria for PF identifiability are.

The importance of mutual belief to the successful use of referring expressions was demonstrated by Clark and Marshall (1981). It was shown by a series of rather complex examples that, if one did not observe an infinite number of preconditions of the form "A believes that B believes that A believes that B believes ...
description D applies to R," then it is impossible to guarantee that description D can be used to refer felicitously to R, because it would always be possible to construct some set of circumstances in which the hearer would believe the speaker intended to refer to something else. Perrault and Cohen (1981) show that a slightly weaker condition is adequate: the mutual belief preconditions have to hold in all but a finite number of cases. Nadathur and Joshi (1983) adopt a strategy that amounts to assuming that if D is believed to apply to R, then it is also mutually believed to apply to R unless there is reason to believe that it is not.

The case for some form of mutual belief as a prerequisite to a successful referring action is strong; however, speakers often use noun phrases that should be analyzed as referential in which it is clear from the context that not only is the description not mutually believed to hold of the intended referent, but the speaker knows this is the case when he plans the utterance. For example, consider a situation in which the speaker is giving instructions to the hearer and says

(5) Turn left at the third block past the stoplight.

This utterance might be reasonable even if the hearer had never been to the intersection in question and the speaker and hearer have no mutual belief at the time of the utterance about the location to which the speaker intends to refer. The hearer knows that the speaker can formulate a plan at the time of the utterance that will guarantee that he will have identified the referent of the description at the time that it is needed.

This observation is one motivation for the distinction drawn along the horizontal axis of Figure 1. There are really two kinds of definite referring actions: one is that in which the precondition is mutual knowledge of a description, and the other in which there is mutual knowledge of a plan incorporating the description to acquire additional knowledge.

5 Definitions of Concept Activation Actions

This section discusses each of the four major types of concept activation actions outlined in Section 3. The definitions of the actions are not stated rigorously, but are intended to give the reader an intuitive understanding of their preconditions and effects, and how they differ from each other.

5.1 Shared Concept Activation with Identification Intention (SI)

These actions are the only type of concept activation actions that were considered in the earlier KAMP research. SI actions are used most frequently in referring to past events and objects that are not perceptually accessible to the hearer. In such situations, the hearer can perform few, if any, actions to acquire more knowledge that would enable him to identify the referent of a description whose referent was not already mutually known at the time of the utterance.

SI Action: The speaker S performs action SI with hearer H and term D.
Preconditions: There is some term D' which is PF identifiable; S and H mutually believe that Denotation(D) = Denotation(D').
Effect: H knows that S intends that the term D' be active.

The preconditions of this action depend strictly on the mutual belief of the speaker and the hearer at the time of the utterance. The noun phrase in a sentence such as

(6) Use the same wrench you used to unfasten the pump.

must arise from this type of action in normal situations of its use, because the description, based on a past event, does not facilitate any kind of plan for acquiring more information.
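Read operationally, the SI definition is a precondition test over the mutual belief state. The following sketch is hypothetical: the encoding of mutual beliefs as pairs of co-denoting terms is invented for illustration, and standard names play the role of PF identifiable terms, as in the original KAMP system.

```python
# Hypothetical sketch of the SI precondition: the speaker may plan an SI
# action with term D only if some PF identifiable term D' is mutually
# believed to co-denote with D at utterance time.

def si_applicable(D, mutual_beliefs, pf_identifiable):
    """mutual_beliefs: set of (term1, term2) pairs mutually believed to
    have the same denotation; pf_identifiable: predicate on terms."""
    return any(pf_identifiable(D2)
               for (D1, D2) in mutual_beliefs
               if D1 == D)

# Toy example: standard names (here, strings prefixed "std:") serve as
# the PF identifiable terms.
beliefs = {("the-wrench-you-used", "std:WRENCH-3")}
print(si_applicable("the-wrench-you-used", beliefs,
                    lambda t: t.startswith("std:")))   # True
```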
When planning an utterance, the speaker knows the PF identifiable term, and his problem is to get the hearer to recognize the same term. Consistency with the Gricean maxim of quantity requires that the planned description be as simple or efficient as possible. There are several ways to measure the complexity of a description, including the number of descriptors involved and the ease with which these descriptors can be incorporated into the utterance. When planning an SI action, the planner's most important task is reasoning about the efficiency of the description.

Concept activation actions that involve shared belief about the denotation of the description at the time of the utterance have the property that they are candidates for action subsumption. Because the information required to perform the identification can be communicated through a subset of the descriptors in the noun phrase, or extralinguistically through pointing actions or strong contextual cues, and because the precondition Denotation(D) = Denotation(D') is known to hold, the speaker can use the additional descriptors to inform the hearer that the descriptors are true of the intended referent.

5.2 Nonshared Concept Activation with Identification Intention (NSI)

This action is what a speaker does when he wants to refer to an object that is not known to the hearer, or for which the speaker and hearer do not mutually believe enough properties at the time of the utterance so that identification can take place based on mutual knowledge.

NSI Action: The speaker S performs action NSI with hearer H and term D.
Preconditions: S and H mutually believe that there is some plan P such that, if H executes P, then in the resulting state, there exists a PF identifiable term D' such that H knows that Denotation(D) = Denotation(D'), and S intends that H execute P.
Effects: H knows that S intends that D be active.

The NSI action is used in situations in which the speaker and hearer do not mutually know the denotation of the description, yet, to realize the perlocutionary effects of the utterance, the hearer must be able to identify the speaker's intended referent. This lack of mutual knowledge may occur if the speaker can identify the referent from the description, but the hearer cannot, as is most likely the case in example (5). Also, as is the case in example (7), the speaker may not be able to identify the referent, but nevertheless knows of a plan the hearer can execute that will lead to the identification of the referent at the appropriate time.

(7) Get me the largest tomato from the garden.

The speaker of sentence (7) is uttering an attributive description, because he is probably not referring to a particular tomato, but to whatever tomato fits his description. However, it is conceivable that he had a particular tomato in mind, and chose that description because he believed it would lead to the best plan for the hearer to identify it; the description would, in that case, be referential. One can see from this example that the referential-attributive distinction is orthogonal to the distinctions motivated by utterance planning. In both referential and attributive cases, the speaker knows that the right conditions on mutual knowledge are not satisfied for an SI action, and plans a description that he knows the hearer can use successfully. It does not matter to the planner whether the description is referential or attributive -- the same reasoning takes place in both cases with the same results.
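The NSI precondition can be sketched in the same style as the SI one; the difference is that the co-denoting PF identifiable term need only exist in the state the hearer would reach by executing some mutually believed plan. The plan simulator and belief encoding below are invented for illustration.

```python
# Hypothetical sketch of the NSI precondition: there must be a plan P,
# mutually believed to be executable by the hearer, whose resulting state
# supplies a PF identifiable term co-denoting with D.

def nsi_applicable(D, candidate_plans, simulate, pf_identifiable):
    """candidate_plans: plans the hearer is mutually believed able to run;
    simulate(plan): the (term, term) co-denotation facts that would hold
    for the hearer after executing the plan."""
    for plan in candidate_plans:
        facts = simulate(plan)
        if any(D1 == D and pf_identifiable(D2) for (D1, D2) in facts):
            return plan          # the plan S intends H to execute
    return None                  # no such plan: the description is not useful

# Toy example for (5): driving and counting blocks lets the hearer
# identify "the third block past the stoplight" perceptually.
drive = "drive-and-count-blocks"
sim = lambda p: ({("third-block-past-stoplight", "std:CORNER-7")}
                 if p == drive else set())
print(nsi_applicable("third-block-past-stoplight", [drive], sim,
                     lambda t: t.startswith("std:")))   # drive-and-count-blocks
```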
The NSI action depends on the hearer's ability to find the plan P. Therefore, the speaker must plan to furnish information as part of P that will make it as easy as possible for the hearer to formulate his plan. If the hearer has enough information to formulate P, then D is a useful description. It is possible for a speaker to formulate a description that, although it denotes the individual the speaker has in mind, is not useful because there is no plan the hearer can formulate to take advantage of the description. An example of such a nonuseful description would be if S and H are riding a bus, H asks at what stop he should get off, and S replies "one stop before I do." The description "one stop before I do," while being true of a unique location, is not a useful description, assuming that the hearer has recourse only to observing the speaker's actions.

The reader may wonder if an SI action can be regarded as a degenerate case of the NSI action: in the case of the NSI action, the speaker and hearer mutually know of a plan that will result in identification of the intended referent, and in the case of the SI action, the plan is simply to do nothing, because the referent of the term is already mutually known. This is not the case, because the precondition of the SI action is that the speaker and hearer mutually believe both the description in the noun phrase and the PF identifiable description. In the case of the NSI action, the speaker and hearer mutually believe that executing plan P will result in the hearer acquiring the required information, but, since only the hearer is actually executing the plan, the speaker and hearer may never meet the mutual belief condition of the SI action. Therefore it is possible to have an NSI action with a null plan, which is not equivalent to an SI action with the same description. For example, suppose a speaker wants a son to deliver an envelope to his father, and makes the request

(8) Give this to your father.

although the speaker does not know who the son's father is. In sentence (8) the speaker is using the description attributively because he has no particular individual in mind, just whoever fits the description. Furthermore, the speaker assumes that the son is capable of identifying his own father on the basis of knowledge he already has; therefore the plan for the hearer to identify the description is to do nothing. This is different from the SI action, in which there is some individual who is mutually believed to be the hearer's father.

5.3 Shared Concept Activation with No Identification Intention (SNI)

When a speaker performs an SNI action, he provides a description, but he does not intend that the hearer try to identify its denotation. Therefore, the SNI action is not a referring action, because identification is an essential part of referring. The SNI action is used when a speaker has a belief involving some individual for whom he has a description, but not a PF identifiable description, and intends that the hearer hold the same belief.

SNI Action: The speaker S performs action SNI with hearer H and term D.
Preconditions: S and H mutually believe that there exists an individual R such that Denotation(D) = R.
Effects: H knows that S intends that D be active.

The primary effect of the SNI action is the same as the NSI action: it activates the term corresponding to the description D.
However, because the preconditions are different, no intention to identify the description is communicated, and the ultimate effect of the action on the hearer's beliefs and intentions is therefore quite different. This type of action underlies the use of an attributive description when no identification is intended. It has been discussed in the literature (Donnellan, 1966; Kripke, 1977) with the situation of two people discovering Smith's badly mutilated body, and one saying "The man who murdered Smith is insane." In this situation, the speaker is informing the hearer of something about the referent of the description "man who murdered Smith," but does not know who this individual is, nor does he intend that the hearer identify him. However, there are conditions on the mutual belief of the speaker and hearer for the utterance to make sense. The speaker and hearer must mutually believe that the dead man is Smith, that he was in fact murdered, and that it was a man who killed him.

5.4 Nonshared Concept Activation with No Identification Intention (NSNI)

NSNI Action: The speaker S performs action NSNI with hearer H and term D.
Preconditions: No mutual belief preconditions.
Effects: H knows that S intends that the term D be active.

Unlike the SNI action, the NSNI action does not require that the speaker and hearer share any knowledge about the denotation of the description prior to the utterance. This action is used by a speaker to introduce a new individual to the discourse, without intending that the hearer associate that individual with any that he already knows about. For example, a speaker says, "I met an old friend from high school yesterday." The speaker does not assume that the hearer shares any knowledge of his old high school friends, nor does he intend the hearer to identify the person he is talking about. The most important consideration for the planner in this case is to include enough information in the description D to serve the speaker's purpose in the rest of the discourse. NSNI actions are most frequently realized by referential indefinite noun phrases (Fodor and Sag, 1982). Such a noun phrase is indefinite, but it is clear from the context that there is some particular individual that is denoted by the description.

6 Summary

This paper has examined a class of actions called concept activation actions, in which a speaker communicates the intent that the hearer recognize a particular description. The performance of one of these actions consists of uttering a noun phrase, either in isolation, or as part of a sentence. Therefore, the noun phrases resulting from the performance of a concept activation action are, in some sense, referential, even though neither the speaker nor the hearer may know the noun phrase's denotation, either at the time of the utterance or subsequently.

While the four actions discussed in this paper account for a very important class of noun phrases, the class by no means exhausts all possibilities, and further research is needed to understand the pragmatic considerations relevant to other noun phrases. Some other noun-phrase examples were discussed earlier, including quantificational noun phrases and predicate nominals. Generics and bare plurals will require additional analysis. There is also an extremely important class of concept activation actions that has not been discussed here, namely coreferring actions, which entail the activation of terms that have already been introduced to the discourse.
This analysis of the actions underlying the production of noun phrases is of particular importance to utterance planning. Planning requires a characterization of actions that describes what their effects are, when they are applicable, and what strategies are available for their expansion. The four actions described in this paper fill an important gap that has been left open in previous utterance-planning research.

Acknowledgements

This research was supported, in part, by the National Science Foundation under grant DCR-8407238 and was made possible, in part, by a gift from the System Development Foundation to SRI International as part of a coordinated research program with the Center for the Study of Language and Information at Stanford University. The author is grateful to Barbara Grosz and Ray Perrault for comments on earlier drafts of this paper.

References

Appelt, D. E., Planning English Sentences, Cambridge University Press, Cambridge, UK (1985).

Clark, H. and C. Marshall, "Definite Reference and Mutual Knowledge," in Joshi, Sag, and Webber (eds.) Elements of Discourse Understanding, Cambridge University Press, Cambridge, UK (1981) pp. 10-63.

Cohen, P. R. and C. R. Perrault, "Elements of a Plan Based Theory of Speech Acts," Cognitive Science 3 (1979) pp. 177-212.

Cohen, P. R., "Referring as Requesting," Proceedings of the Tenth International Conference on Computational Linguistics (1984) pp. 207-211.

Cohen, P. R. and H. Levesque, "Speech Acts and the Recognition of Shared Plans," Proceedings of the Third Biennial Conference, Canadian Society for Computational Studies of Intelligence (1980).

Cohen, P. R., "Pragmatics, Speaker-Reference, and the Modality of Communication," Computational Linguistics 10 (1984) pp. 97-146.

Donnellan, K., "Reference and Definite Descriptions," Philosophical Review 75 (1966) pp. 281-304.

Fodor, J. and I. Sag, "Referential and Quantificational Indefinites," Linguistics and Philosophy 5 (1982) pp. 355-398.

Kripke, S., "Speaker Reference and Semantic Reference," in French, Uehling, and Wettstein (eds.) Contemporary Perspectives in the Philosophy of Language, University of Minnesota Press, Minneapolis, MN (1977) pp. 6-27.

Nadathur, G. and A. Joshi, "Mutual Beliefs in Conversational Systems: Their Role in Referring Expressions," Proceedings of the Eighth International Joint Conference on Artificial Intelligence (1983) pp. 603-605.

Perrault, C. R. and P. R. Cohen, "It's for Your Own Good: A Note on Inaccurate Reference," in Joshi, Sag, and Webber (eds.) Elements of Discourse Understanding, Cambridge University Press, Cambridge, UK (1981).

Searle, J. R., Speech Acts, Cambridge University Press, Cambridge, UK (1969).
REPAIRING REFERENCE IDENTIFICATION FAILURES BY RELAXATION

Bradley A. Goodman
BBN Laboratories
10 Moulton Street
Cambridge, Mass. 02238

ABSTRACT

The goal of this work is the enrichment of human-machine interactions in a natural language environment.¹ We want to provide a framework less restrictive than earlier ones by allowing a speaker leeway in forming an utterance about a task and in determining the conversational vehicle to deliver it. A speaker and listener cannot be assured to have the same beliefs, contexts, backgrounds or goals at each point in a conversation. As a result, difficulties and mistakes arise when a listener interprets a speaker's utterance. These mistakes can lead to various kinds of misunderstandings between speaker and listener, including reference failures or failure to understand the speaker's intention. We call these misunderstandings miscommunication. Such mistakes constitute a kind of "ill-formed" input that can slow down and possibly break down communication. Our goal is to recognize and isolate such miscommunications and circumvent them. This paper will highlight a particular class of miscommunication - reference problems - by describing a case study, including techniques for avoiding failures of reference.

¹This research was supported in part by the Defense Advanced Research Projects Agency under contract N00014-77-C-0378.

1 Introduction

Cohen, Perrault and Allen showed in their paper "Beyond Question Answering" [8] that "... users of question-answering systems expect them to do more than just answer isolated questions -- they expect systems to engage in conversation. In doing so, the system is expected to allow users to be less than meticulously literal in conveying their intentions, and it is expected to make linguistic and pragmatic use of the previous discourse." Following in their footsteps, we want to build robust natural language processing systems that can detect and recover from miscommunication. The development of such systems requires a study of how people communicate and how they recover from problems in communication.

This paper summarizes the results of a dissertation [13] that investigates the kinds of miscommunication that occur in human communication, with a special emphasis on reference problems, i.e., problems a listener has determining whom or what a speaker is talking about. We have written computer programs and algorithms that demonstrate how one could handle such problems in the context of a natural language understanding system. The study of miscommunication is a necessary task within such a context, since any computer capable of communicating with humans in natural language must be tolerant of the imprecise, ill-devised or complex utterances that people often use.

Our current research [25, 26] views most dialogues as being cooperative and goal directed, i.e., a speaker and listener work together to achieve a common goal. The interpretation of an utterance involves identifying the underlying plan or goal that the utterance reflects [5, 1, 23]. This plan, however, is rarely, if ever, obvious at the surface sentence level. A central issue in the interpretation of utterances is the transformation of sequences of imprecise, ill-devised or complex utterances into well-specified plans that might be carried out by dialogue participants. Within this context, miscommunication can occur. We are particularly concerned with cases of miscommunication from the hearer's viewpoint, such as when the hearer is inattentive to,
confused about, or misled about the intentions of the speaker.

In ordinary exchanges speakers usually make assumptions regarding what their listeners know about a topic of discussion. They will leave out details thought to be superfluous [2, 19]. Since the speaker really does not know exactly what a listener knows about a topic, it is easy to make statements that can be misinterpreted or not understood by the listener because not enough details were presented. One principal source of trouble is the description constructed by the speaker to refer to an actual object in the world. The description can be imprecise, confused, ambiguous or overly specific. It might be interpreted under the wrong context. This leads to difficulty for the listener when figuring out what object is being described, that is, reference identification errors. Such descriptions are "ill-formed" input; the blame for ill-formedness may lie partly with the speaker and partly with the listener. The speaker may have been sloppy or not taken the hearer into consideration; the listener may be either remiss or unwilling to admit he can't understand the speaker and to ask the speaker for clarification, or may simply feel that he has understood when he in fact has not.

This work is part of an on-going effort to develop a reference identification and plan recognition mechanism that can exhibit more "human-like" tolerance of such utterances. Our goal is to build a more robust system that can handle errorful utterances, and that can be incorporated in existing systems. As a start, we have concentrated on reference identification. In conversation people use imperfect descriptions to communicate about objects; sometimes their partners succeed in understanding and occasionally they fail. Any computer hoping to play the part of a listener must be capable of taking what the speaker says and either deleting, adapting or clarifying it. We are developing a theory of the use of extensional descriptions that will help explain how people successfully use such imperfect descriptions. We call this the theory of reference miscommunication.

Section 2 of this paper highlights some aspects of normal communication and then provides a general discussion of the types of miscommunication that occur in conversation, concentrating primarily on reference problems and motivating many of them with illustrative protocols. Section 3 presents possible ways around some of the problems of miscommunication in reference. Motivated there is a partial implementation of a reference mechanism that attempts to overcome many reference problems.

We are following the task-oriented paradigm of Grosz [14] since it is easy to study (through videotapes), it places the world in front of you (a primarily extensional world), and it limits the discussion while still providing a rich environment for complex descriptions. The task chosen as the target for the system is the assembly of a toy water pump. The water pump is reasonably complex, containing four subassemblies that are built from plastic tubes, nozzles, valves, plungers, and caps that can be screwed or pushed together. A large corpus of dialogues concerning this task was collected by Cohen (see [7, 8, 9]). These dialogues contained instructions from an "expert" to an "apprentice" that explain the assembly of the toy water pump. Both participants were working to achieve a common goal - the successful assembly of the pump. This domain is rich in perceptual information, allowing for complex descriptions of elements in it.
The data provide examples of imprecision, confusion, and ambiguity as well as attempts to correct these problems. The following exchange exemplifies one such situation. Here A is instructing J to assemble part of the water pump. Refer to Figure 1(a) for a picture of the pump. A and J are communicating verbally but neither can see the other. (The bracketed text in the excerpt tells what was actually occurring while each utterance was spoken.) Notice the complexity of the speaker's descriptions and the resultant processing required by the listener. This dialogue illustrates when listeners repair the speaker's description in order to find a referent, when they repair their initial reference choice once they are given more information, and when they fail to choose a proper referent.

In Line 7, A describes the two holes on the BASEVALVE as "the little hole." J must repair the description, realizing that A doesn't really mean "one" hole but is referring to the "two" holes. J apparently does this since he doesn't complain about A's description and correctly attaches the BASEVALVE to the TUBEBASE. Figure 1(b) shows the configuration of the pump after the TUBEBASE is attached to the MAINTUBE in Line 10. In Line 13, J interprets "a red plastic piece" to refer to the NOZZLE. When A adds the relative clause "that has four gizmos on it," J is forced to drop the NOZZLE as the referent and to select the SLIDEVALVE. In Lines 17 and 18, A's description "the other--the open part of the main tube, the lower valve" is ambiguous, and J selects the wrong site, namely the TUBEBASE, in which to insert the SLIDEVALVE. Since the SLIDEVALVE fits, J doesn't detect any trouble. Lines 20 and 21 keep J from thinking that something is wrong because the part fits loosely. In Lines 27 and 28, J indicates that A did not give him enough information to perform the requested action. In Line 30, J further compounds the error in Line 18 by putting the SPOUT on the TUBEBASE.

Excerpt 1 (Telephone)
A: 1. Now there's a blue cap [J grabs the TUBEBASE]
   2. that has two little teeth sticking
   3. out of the bottom of it.
J: 4. Yeah.
A: 5. Okay. On that take the
   6. bright shocking pink piece of plastic [J takes BASEVALVE]
   7. and stick the little hole over the teeth. [J starts to install the BASEVALVE, backs off, looks at it again and then goes ahead and installs it]
J: 8. Okay.
A: 9. Now screw that blue cap onto
   10. the bottom of the main tube. [J screws TUBEBASE onto MAINTUBE]
J: 11. Okay.
A: 12. Now, there's a--
   13. a red plastic piece [J starts for NOZZLE]
   14. that has four gizmos on it. [J switches to SLIDEVALVE]
J: 15. Yes.
A: 16. Okay. Put the ungizmoed end in the uh
   17. the other--the open
   18. part of the main tube, the lower valve. [J puts SLIDEVALVE into hole in TUBEBASE, but A meant OUTLET2 of MAINTUBE]
J: 19. All right.
A: 20. It just fits loosely. It doesn't
   21. have to fit right. Okay, then take
   22. the clear plastic elbow joint. [J takes SPOUT]
J: 23. All right.
A: 24. And put it over the bottom opening, too. [J tries installing SPOUT on TUBEBASE]
J: 25. Okay.
A: 26. Okay. Now, take the--
J: 27. Which end am I supposed to put it over?
   28. Do you know?
A: 29. Put the--put the--the big end--
   30. the big end over it. [J pushes big end of SPOUT on TUBEBASE, twisting it to force it on]

[Figure 1: The Toy Water Pump, panels (a) and (b).]

2 Miscommunication

People must and do manage to resolve lots of (potential) miscommunication in everyday conversation.
Much of it is resolved subconsciously, with the listener unaware that anything is wrong. Other miscommunication is resolved with the listener actively deleting or replacing information in the speaker's utterance until it fits the current context. Sometimes this resolution is postponed until the questionable part of the utterance is actually needed. Still, when all these fail, the listener can ask the speaker to clarify what was said.²

There are many aspects of an utterance that the listener can become confused about and that can lead to miscommunication. The listener can become confused about what the speaker intends for the referents, the actions, and the goals described by the utterance. Confusions often appear to result from conflict between the current state of the conversation, the overall goal of the speaker, or the manner in which the speaker presented the information. However, when the listener steps back and is able to discover what kind of confusion is occurring, then the confusion can quite possibly be resolved.

2.1 Causes of miscommunication

This section attempts to motivate a paradigm for the kinds of conversation that we studied and tries to point out places in the paradigm that leave room for miscommunication.

2.1.1 Effects of the structure of task-oriented dialogues

Task-oriented conversations have a specific goal to be achieved: the performance of a task (e.g., [14]). The participants in the dialogue can have the same skill level and can simply work together to accomplish the task; or one of them, the expert, could know more and could direct the other, the apprentice, to perform the task. We have concentrated primarily on the latter case - due to the protocols that we examined - but many of our observations can be generalized to the former case, too. We will refer to this as the apprentice-expert domain.

The viewpoints of the expert and apprentice differ greatly in apprentice-expert exchanges. The expert, having an understanding of the functionality of the elements in the task, has more of a feel for how the elements work together, how they go together, and how the individual elements can be used. The apprentice normally has no such knowledge and must base his decisions on perceptual features such as shape [15].

The structure of the task affects the structure of the dialogue [14], particularly through the center of attention of the expert and apprentice. This is the phenomenon called focus [14, 20, 24], which, in task-oriented dialogues, is a very real and operational thing (e.g., focus is used in resolving anaphoric references). Shifts in focus correspond directly to the task, its subtasks, the objects in a task and the subpieces of each object. Focus and focus shifts are governed by many rules [14, 20, 24]. Confusion may result when expected shifts do not take place. For example, if the expert changes focus to an object but never discusses its subpieces (such as an obvious attachment surface) or never bothers to talk about the object reasonably soon after its introduction (i.e., between the time of its introduction and its use, without digressing in a well-structured way in between (see [20])), then the apprentice may become confused, leaving him ripe for miscommunication. The reverse influence between focus and objects can lead to trouble, too. A shift in focus by the expert that does not have a manifestation in the apprentice's world will also perplex the apprentice. Focus also influences how descriptions are formed [15, 2].
The level of detail required in a description depends directly on the elements currently highlighted by the focus. If the object to be described is similar to other elements in focus, the expert must be more specific in the formulation of the description, or may consider shifting focus away from the possibly ambiguous objects to one where the ambiguity won't occur.

2.2 Consequences of miscommunication

In this section we will make it clear that people do miscommunicate and yet they often manage to fix things. We will look at specific forms of miscommunication and describe ways to detect them. We will highlight relationships between different miscommunication problems but won't necessarily demonstrate ways to resolve each of them.

²An analysis of clarification subdialogues can be found in [17].
In fine 3, Excerpt 3, the feature "funny" has no relevance to the listener. It is not until A provides a fuller description in Lines 5 to 8 that E is able to select the proper piece. (2) It may use a vague head noun coupled with few or no feature values (and context alone does not necessarily suffice to distinguish the object). In Excerpt 4, Line 9, "attachment" is vague because all objects in the domain are attachable parts. The expert's use of "attachment" was most likely to signal the action the apprentice can expect to take next. The use of the feature value "clear'* provides little benefit either because three clear, unused parts exist. The size descriptor "little" prunes this set of possible referents down to two contenders. (3) Enough feature values are provided but at least one value is too vague leading to trouble. In Excerpt 5, Line 3, the use of the attribute value "rounded" to describe the shape does not sufficiently reduce the set of four possible referents (though, in this particular instance, A correctly identifies it) because the term is applicable to numerous parts In the dommn. A more precise shape descriptor such as "bell-shaped" or "cylindrical" would have been more beneficial to the listener, Excerpt 3 (Telephone) E: I. All right. 2. Now. 3. There's another funny little 4. red thing, a [A is confused, examines both NOZZLE SX.,mr-VALVE ] 5. little teeny red thing that's 6. some--should be somewhere on 7. the desk, that has um--there's 8. like teeth on one end. [E takes SLIDEVALVE] and A: 9. Okay. E: 10. It's a funny-loo--hollow, 11. hollow projection on one end 12. and then teeth on the other. Excerpt 4 (Teletype) A: I. take the red thing with the 2. prongs on it 3. and fit it onto the other hole 4. of the cylinder 5. so that the prongs are 6. sticking out 2O7 R: 7. ok A: 8. now take the clear little 9. attachment 10. and put on the hole where you 11. just put the red cap on 12. make sure it points 13. upward R: 14. ok F, xeerpt 5 (Teletype) S; I. Ok, 2. put the red nozzle on the outlet 3. of the rounded clear chamber 4. ok? A: 5. got it. Improper Focus Focus confusion can occur when the speaker sets up one focus and then proceeds with another one without letting the listener know of the switch (i.e., a focus shift occurs without any indication). An opposite phenomenon can also happen - the listener may feel that a focus shift has taken place when the speaker actually never intended one. These really are very similar - one Is viewed more strongly from the perspective of the speaker and the other from the listener. Excerpt 6 below lUustrates an mstance of the first type of focus confusion. In the excerpt, the speaker (S) shifts focus without notifying the listener (P) of the switch. As the excerpt begins, P ,s holding the TUBEBASE. S provides in Lines 1 to 16 mstructzons for P to attach the CAP and the SPOUT to outlets OUTLETI and OUTLET2, respectively, on the MAINTUSE. Upon P's successful completion of these attachments. S switches focus m Lines 17 to 20 to the TUSESASE assembly and requests P to screw tt on to the bottom of the M,e/NTUSE. White P completes the task. S realizes she left out a step in the assembly - the placement of the SLIDEVALVE into OUTLET2 of the M,eJNTUSE before the SPOUT ts placed over the same outlet. S attempts to correct her mistake by requesting P to remove "the pies "~ piece in ~nes 22 and 23. Since S never indicated a shift in focus from the TUSESASE back to the IPOUT, P mterprets "the pies" to refer to the TUSESASE. 
Excerpt 6 (Face-to-Face)
S: 1. And place
   2. the blue cap that's left [P takes CAP]
   3. on the side holes that are
   4. on the cylinder, [P lays down TUBEBASE]
   5. the side hole that is farthest
   6. from the green end. [P puts CAP on OUTLET1 of MAINTUBE]
P: 7. Okay.
S: 8. And take the nozzle-looking
   9. piece, [P grabs NOZZLE]
   10. no
   11. I mean the clear plastic one, [P takes SPOUT]
   12. and place it on the other hole [P identifies OUTLET2 of MAINTUBE]
   13. that's left,
   14. so that nozzle points away
   15. from the [P installs SPOUT on OUTLET2 of MAINTUBE]
   16. right.
P: 17. Okay.
S: 18. Now
   19. take the
   20. cap base thing [P takes TUBEBASE]
   21. and screw it onto the bottom, [P screws TUBEBASE on MAINTUBE]
   22. ooops, [S realizes she has forgotten to have P put SLIDEVALVE into OUTLET2 of MAINTUBE]
   23. un-undo the plas [P starts to take TUBEBASE off MAINTUBE]
   24. no
   25. the clear plastic thing that I
   26. told you to put on [P removes SPOUT]
   27. sorry.
   28. And place the little red thing [P takes SLIDEVALVE]
   29. in there first, [P inserts SLIDEVALVE into OUTLET2 of MAINTUBE]
   30. it fits loosely in there.

³The whole word here is "plastic." People in general tend to be good at proceeding before hearing the whole utterance or even the whole word.

Excerpt 7 below demonstrates the latter type of focus confusion, which occurs when the speaker (S) sets up one focus - the MAINTUBE, which is the correct focus in this case - but then proceeds in such a manner that the listener (J) thinks a focus shift to another piece, the TUBEBASE, has occurred. Thus, Line 15 refers to "the lower side hole in the MAINTUBE" for S and "the hole in the TUBEBASE" for J. J has no way of realizing that he has focused incorrectly unless the description as he interprets it doesn't have a real world correlate (here something does satisfy the description, so J doesn't sense any problem) or if, later in the exchange, a conflict arises due to the mistake (e.g., a requested action can not be performed). In Line 31, J inserts a piece into the wrong hole because of the misunderstanding in Line 15. Line 31 hints that J may have become suspicious that an ambiguity existed, but since the task was successfully completed (i.e., the red piece fit into the hole in the base), and since S did not provide any clarification, he assumed he was correct.

Excerpt 7 (Telephone)
S: 1. Um now.
   2. Now we're getting a little
   3. more difficult.
J: 4. (laughs)
S: 5. Pick out the large air tube [J picks up STAND]
   6. that has the plunger in it. [J puts down STAND, takes PLUNGER/MAINTUBE assembly]
J: 7. Okay.
S: 8. And set it on its base, [J puts down MAINTUBE, standing vertically, on the TABLE]
   9. which is blue now,
   10. right? [J has shifted focus to the TUBEBASE]
J: 11. Yeah.
S: 12. Base is blue.
   13. Okay.
   14. Now
   15. You've got a bottom hole still
   16. to be filled,
   17. correct?
J: 18. Yeah. [J answers this with MAINTUBE still sitting on the TABLE; he shows no indication of what hole he thinks is meant - the one on the MAINTUBE, OUTLET2, or the one in the TUBEBASE]
S: 19. Okay.
   20. You have one red piece
   21. remaining? [J picks up MAINTUBE assembly and looks at TUBEBASE, rotating the MAINTUBE so that TUBEBASE is pointed up, and sees the hole in it; he then looks at the SLIDEVALVE]
J: 22. Yeah.
S: 23. Okay.
   24. Take that red piece. [J takes SLIDEVALVE]
   25. It's got four little feet on
   26. it?
J: 27. Yeah.
S: 28. And put the small end into
   29. that hole on the air tube--
   30. on the big tube.
J: 31. On the very bottom? [J
starts to put it into the bottom hole of TUBEBASE, though he indicates he is unsure of himself]
S: 32. On the bottom,
   33. Yes.

Misfocus can also occur when the speaker inadvertently fails to distinguish the proper focus because he did not notice a possible ambiguity; or when, through no fault of the speaker, the listener just fails to recognize a switch in focus indicated by the speaker. Excerpt 7 above is an example of the first type because S failed to notice that an ambiguity existed, since he never explicitly brought the TUBEBASE either into or out of focus. He just assumed that J had the same perspective as him - a perspective in which no ambiguity occurred.

Wrong Context

Context differs from focus. The context of a portion of a conversation is concerned with the point of the discussion in that fragment and with the set of objects relevant to that discussion, though not attended to currently. Focus pertains to the elements which are currently being attended to in the context. For example, two people can share the same context but have different focus assignments within it - we're both talking about the water pump but you're describing the MAINTUBE and I'm describing the AIRCHAMBER. Alternatively, we could just be using different contexts - I think you're talking about taking the pump apart but you're talking about replacing the pump with new parts - in both cases we may be sharing the same focus - the pump - but our contexts are totally off from one another.⁴

⁴Grosz [14, 15] would describe this as a difference in "task plans" while Reichman [20, 21] would say that the "communicative goals" differed.

The kinds of misunderstandings that can occur because of context problems are similar to those for focus problems: (1) the speaker might set up or be in one context for a discussion and then proceed in another one without effectively letting the listener know of the change, (2) the listener may feel a change in context has taken place when in fact the speaker never intended one, or (3) the listener fails to recognize an indicated context switch by the speaker. Context affects reference because it helps define the set of available objects that are possible contenders for the referent of the speaker's descriptions. If the contexts of the speaker and listener differ, then misreference might result.

Bad Analogy

An analogy (see [10] for a discussion on analogies) is a useful way to help describe an object by attempting to be more precise through shared past experience and knowledge - especially shape and functional information. If that past experience or knowledge doesn't contain the information the speaker assumes it does, or isn't there, then trouble occurs. Thus, one more way referent confusion can occur is by describing an object using a poor analogy. An analogy used to describe an object might not be specific enough - confusing the listener because several pieces might conform to the analogy or, in fact, none at all appear to fit because discovering a mapping between the analogous object and some piece in the environment is too difficult. In Excerpt 8, J at first has trouble correctly satisfying A's functional analogy "stopper" in "the big blue stopper", but finally selects what he considers to be the closest match to "stopper".

Excerpt 8 (Telephone)
A: 1. Okay. Now.
   2. take the big blue
   3. stopper that's laying around [J grabs AIRCHAMBER]
   4. .. and take the black
   5. ring--
J: 6. The big blue stopper?
[J is confused and tries to communicate it to A; he is holding the AIRCHAMBER here]
A: 7. Yeah.
   8. the big blue stopper
   9. and the black ring [J drops AIRCHAMBER and takes the O-RING and the TUBEBASE]

In other cases an analogy might be too specific - confusing the listener because none of the available referents appear to fit it. In Line 8 of Excerpt 6, "nozzle-looking" forms a poor shape analogy because the object being referred to actually is an elbow-shaped spout. The "nozzle-looking" part of the description convinced the listener that what he was looking for was something specific like a nozzle (which is a small spout). Sometimes, when an object is a clear representative of a specified analogy class, the apprentice may become confused, wondering why the expert bothered to form an analogy instead of just directly describing the object as a member of the class. Hence, it would not be surprising if the apprentice ignored the best representative of the class for some less obvious exemplar. Thus, for example, it is better to say "nozzle" instead of "nozzle-looking." In Excerpt 9, the description "hippopotamus face shape" (a shape analogy) in Lines 2 and 3, and "champagne top" (a shape analogy) in Line 9, are too specific, and the listener is unable to easily find something close enough to match either of them. He can't discover a mapping between the object in the analogy and one in the real world.

Excerpt 9 (Audiotape)

M: 1. take the bright pink flat
   2. piece of hippopotamus face
   3. shape piece of plastic
   4. and you notice that the two
   5. holes on it [M is trying to refer to BASEVALVE]
   6. match
   7. along with the two
   8. peg holes on the
   9. champagne top sort of
   10. looking bottom that had
   11. threads on it [M is trying to refer to TUBEBASE]

Description incompatibility

Incompatible descriptions can lead to confusion also. A description is incompatible when (1) one or more of the specified conditions, i.e., the feature values, do not satisfy any of the pieces; (2) one or more specified constraints do not hold (e.g., saying "the loose one" when all objects are tightly attached); or (3) no one object satisfies all of the features specified in the description. In Lines 7 and 8 of Excerpt 9 above, M's use of "the two peg holes" leads to bewilderment for the listener because the described object has no holes in it. M actually meant "two pegs".

2.2.2 Detecting miscommunication

Part of our research has been to examine how a listener discovers the need for a repair of an utterance or a description during communication. The incompatibility of a referent or action is one signal of possible trouble. The appearance of an obstacle that blocks one from achieving a goal is another indication of a problem.

Incompatibility

Two kinds of incompatibility, action or referent, appear in the taxonomy of confusions. The strongest hint that there is a reference problem occurs when the listener finds no real-world object to correspond to the speaker's description. This can occur when (1) one or more of the specified feature values in the description are not satisfied by any of the pieces (e.g., saying "the orange cap" when none of the objects are orange); (2) one or more specified constraints do not hold (e.g., saying "the red plug that fits loosely" when all the red plugs attach tightly); or (3) no one object satisfies all of the features specified in the description (i.e., there is, for each feature, an object that exhibits the specified feature value, but no one object exhibits all of the values). A minimal sketch of these three checks appears below.
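The three reference-incompatibility conditions are mechanical enough to state as code. The sketch below is illustrative only: the dictionary representation of objects, the predicate form of constraints, and all the names are our assumptions, not part of the original system.

    # Hypothetical sketch of the three reference-incompatibility checks.
    # Objects and descriptions are dicts mapping feature names to values,
    # e.g. {"color": "blue", "size": "large"}; constraints are predicates
    # over a candidate object, e.g. lambda obj: obj.get("fit") == "loose".

    def incompatibility(description, constraints, objects):
        """Return None if some object satisfies the description,
        else a string naming which incompatibility condition holds."""
        # Condition (1): some feature value satisfies no object at all.
        for feature, value in description.items():
            if not any(obj.get(feature) == value for obj in objects):
                return f"no object has {feature}={value}"

        # Condition (3): every feature is satisfied somewhere, but no
        # single object satisfies all of them at once.
        matching = [obj for obj in objects
                    if all(obj.get(f) == v for f, v in description.items())]
        if not matching:
            return "no single object satisfies all features"

        # Condition (2): a specified constraint holds of no matching object.
        for constraint in constraints:
            if not any(constraint(obj) for obj in matching):
                return "a specified constraint holds of no candidate"

        return None  # a compatible referent exists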
An action problem is likely if (1) the listener cannot perform the action specified by the speaker because of some obstacle; (2) the listener performs the action but does not arrive at its intended effect (i.e., a specified or default constraint isn't satisfied); or (3) the current action affects a previous action in an adverse way, yet the speaker has given no sign of any importance to this side effect.

Goal obstacle

A goal obstacle occurs when a goal (or subgoal) one is trying to achieve is blocked. This blockage can result in confusion for the listener because he did not expect the speaker to give him tasks that could not be achieved. Often, though, it points out for the listener that some miscommunication (such as misreference) has occurred.

Goal redundancy

Goal redundancy occurs when the requested goal (or subgoal) is already satisfied. In some sense, it is a special kind of goal obstacle where the goal to be fulfilled is blocked because it is already satisfied. It is a simple goal obstacle because nothing has to be done to get around it. However, it can lead to confusion on the part of listeners because they may suspect they misunderstood what the speaker has requested, since they wouldn't expect a reasonable speaker to request the performance of an already completed action. It provides a hint that miscommunication has occurred.

3 Repairing Reference Failures

3.1 Introduction

The previous section illustrated how task-oriented natural language interactions in the real world can induce contextually poor utterances. Given all the possibilities for confusion, when confusions do occur, they must be resolved if the task is to be performed. This section explores the problem of fixing reference failures.

Reference identification is a search process in which a listener looks for something in the world that satisfies a speaker's uttered description. A computational scheme for performing reference has evolved from work by other artificial intelligence researchers (e.g., see [14]). That traditional approach succeeds if a referent is found, or fails if no referent is found (see Figure 3(a)). However, a reference identification component must be more versatile than those constructed in the traditional manner. The excerpts provided in the previous section show that the traditional approach is wrong because people's real behavior is much more elaborate. In particular, listeners often find the correct referent even when the speaker's description does not describe any object in the world. For example, a speaker could describe a blue block as the "turquoise block." Most listeners would go ahead and assume that the blue block was the one the speaker meant.

A key feature of reference identification is "negotiation." Negotiation in reference identification comes in two forms. First, it can occur between the listener and the speaker. The listener can step back, expand greatly on the speaker's description of a plausible referent, and ask for confirmation that he has indeed found the correct referent. For example, a listener could initiate negotiation with "I'm confused. Are you talking about the thing that is kind of flared at the top? Couple inches long. It's kind of blue." Second, negotiation can be with oneself. This type of negotiation, called self-negotiation, is the one that we are most concerned with in this research. The listener considers aspects of the speaker's description, the context of the communication, and the listener's own abilities.
He then applies that deliberation to determine whether one referent candidate is better than another or, if no candidate is found, what the most likely places for error or confusion are. Such negotiation can result in the listener testing whether or not a particular referent works. For example, linguistic descriptions can influence a listener's perception of the world. The listener must ask himself whether he can perceive one of the objects in the world the way the speaker described it. In some cases, the listener's perception may overrule the description because the listener can't perceive it the way the speaker described it.

To repair the traditional approach we have developed an algorithm that captures, for certain cases, the listener's ability to negotiate with himself for a referent. It can look for a referent and, if it doesn't find one, it can try to find possible referent candidates that might work, and then loosen the speaker's description using knowledge about the speaker, the conversation, and the listener himself. Thus, the reference process becomes multi-step and resumable. This computational model, which I call "FWIM" for "Find What I Mean", is more faithful to the data than the traditional model (see Figure 3(b)).

[Figure 3: Approaches to reference identification. (a) Traditional: the reference component returns either a referent or failure. (b) FWIM: reference failures are passed to a relaxation component, whose loosened description is fed back to the reference component for another try.]

One means of making sense of an approximate description is to delete or replace portions of it that don't match objects in the hearer's world. In our program we are using "relaxation" techniques to capture this behavior. Our reference identification module treats descriptions as approximate: it relaxes a description in order to find a referent when the literal content of the description fails to provide the needed information. Relaxation, however, is not performed blindly on the description. We try to model a person's behavior by drawing on sources of knowledge used by people. We have developed a computational model that can relax aspects of a description using many of these sources of knowledge. Relaxation then becomes a form of communication repair [4] that hearers can use.

3.2 The relaxation component

When a description fails to denote a referent in the real world properly, it is possible to repair it by a relaxation process that ignores or modifies parts of the description. Since a description can specify many features of an object, the order in which parts of it are relaxed is crucial (i.e., relaxing in different orders could yield matches to different objects). There are several kinds of relaxation possible: one can ignore a constituent, replace it with something close, replace it with a related value, or change focus (i.e., consider a different group of objects). This section describes the overall relaxation component, which draws on knowledge sources about descriptions and the real world as it tries to relax an errorful description to one for which a referent can be identified.

3.2.1 Find a referent using a reference mechanism

Identifying the referent of a description requires finding an element in the world that corresponds to the speaker's description (where every feature specified in the description is present in the element in the world, but not necessarily vice versa). The initial task of our reference mechanism is to determine whether or not a search of the (taxonomic) knowledge base that we use to model the world is necessary.
For example, the reference component should not bother searching - unless specifically requested to do so - for a referent for indefinite noun phrases (which usually describe new or hypothetical objects) or extremely vague descriptions (which do not clearly describe an object because they are composed of imprecise feature values). A number of aspects of discourse pragmatics can be used in that determination (e.g., the use of a deictic in a definite noun phrase, such as "this X" or "the last X", hints that the object was either mentioned previously or was probably evoked by some previous reference, and that it is searchable), but we will not examine them here.

The knowledge base contains linguistic descriptions and a description of the listener's visual scene itself. In our implementation and algorithms, we assume it is represented in KL-One [3], a system for describing taxonomic knowledge. KL-One is composed of CONCEPTs, ROLEs on concepts, and links between them. A CONCEPT is like a set, representing those elements described by it. A SUPERC link ("==>") is used between concepts to show set inclusion. For example, consider Figure 4. The SuperC from Concept B to Concept A is like stating B⊆A for two sets A and B. An INDIVIDUAL CONCEPT is used to guarantee that the subset specified by a concept is unique. The Individual Concept D shown in the figure is defined to be a unique member of the subset specified by Concept C. ROLEs on concepts are like normal attributes and slot fillers in other knowledge representation languages. They define a functional relationship between the concept and other concepts.

[Figure 4: A KL-One taxonomy relating Concepts A, B, and C and Individual Concept D.]

Assuming that a search of the knowledge base is considered necessary, a reference search mechanism is invoked. The search mechanism uses the KL-One Classifier [16] to search the knowledge base taxonomy. This search is constrained by a focus mechanism based on the one developed by Grosz [14]. The Classifier's purpose is to discover all appropriate subsumption relationships between a newly formed description and all other descriptions in a given taxonomy. With respect to reference, this means that all possible (descriptions of) referents of the description will be subsumed by it after it has been classified into the knowledge base taxonomy. If more than one candidate referent is below the classified description (when a description A is subsumed by B, we say A is "below" B), then, unless a quantifier in the description specified more than one element, the speaker's description is ambiguous. If exactly one description is below it, then the intended referent is assumed to have been found. Finally, if no referent is found below the classified description, the relaxation component is invoked. We will only consider the last case in the rest of the paper.
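Read procedurally, Sections 3.2.1-3.2.3 describe a resumable identify-then-relax loop. The following sketch is our own reconstruction of that control flow, not the original implementation; classify, candidates_below, and relax_once are passed in as stand-ins for the KL-One Classifier and the relaxation component.

    # Hypothetical control flow for FWIM-style reference identification.
    # classify, candidates_below, and relax_once are caller-supplied
    # functions standing in for the Classifier and relaxation component.

    def identify_referent(description, taxonomy,
                          classify, candidates_below, relax_once,
                          max_attempts=3):
        """Try for a unique referent; on failure, relax the description
        and resume the search, as in Figure 3(b)."""
        for _ in range(max_attempts):
            node = classify(description, taxonomy)    # place in taxonomy
            found = candidates_below(node, taxonomy)  # subsumed descriptions
            if len(found) == 1:
                return found[0]                       # unique referent
            if len(found) > 1:
                return ("ambiguous", found)           # underdetermined
            # No referent below the classified description: misreference,
            # so loosen the description and try again.
            description = relax_once(description, taxonomy)
            if description is None:                   # nothing left to relax
                break
        return None                                   # give up / ask speaker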
3.2.3 Perform the relaxation of the description

If relaxation is demanded, then the system must (1) find potential referent candidates; (2) determine which features in the speaker's description to relax and in what order, and use those ordered features to order the potential candidates with respect to the preferred ordering of features; and (3) determine the proper relaxation techniques to use and apply them to the description.

Find potential referent candidates

Before relaxation can take place, potential candidates for referents (which denote elements in the listener's visual scene) must first be found. These candidates are discovered by performing a "walk" in the knowledge base taxonomy in the general vicinity of the speaker's classified description. A KL-One partial matcher is used to determine how close the candidate descriptions found during the walk are to the speaker's description. The partial matcher generates a numerical score to represent how well the descriptions match (after first generating scores at the feature level to help determine how the features are to be aligned and how well they match). This score is based on information about KL-One and does not take into account any information about the task domain. The ordering of features and candidates for relaxation described below does take into account the task domain. The set of best descriptions returned by the matcher (as determined by some cutoff score) are selected as referent candidates.

Order the features and candidates for relaxation

At this point the reference system inspects the speaker's description and the candidates, decides which features to relax and in what order,5 and generates a master ordering of features for relaxation. Once the feature order is created, the reference system uses that ordering to determine the order in which to try relaxing the candidates. We draw primarily on sources of linguistic knowledge, pragmatic knowledge, discourse knowledge, domain knowledge, perceptual knowledge, hierarchical knowledge, and trial-and-error knowledge during this repair process. A detailed treatment of all of them can be found in [12, 27, 13]. These knowledge sources are consulted to determine the feature ordering for relaxation. We represent information from each knowledge source as a set of relaxation rules. These rules are written in a PROLOG-like language. Figure 5 illustrates one such linguistic-knowledge relaxation rule. This rule is motivated by the observation in the excerpts that speakers typically add more important information at the end of a description (where it is separated from the main part of the description and thus given more emphasis). Since the syntactic constituents often at the end are relative clauses or predicate complements, we created this more specific relaxation rule. However, a more general and more applicable rule is that information presented at the end of a description is usually more prominent.

5Of course, once one particular candidate is selected, deciding which features to relax is relatively trivial - one simply compares feature by feature between the candidate description (the target) and the speaker's description (the pattern) and notes any discrepancies.

Relax the features in the speaker's description in the order: adjectives, then prepositional phrases, and finally relative clauses and predicate complements. E.g.,

    Relax-Feature-Before(v1,v2) <-
        ObjectDescr(d),
        FeatureDescriptor(v1),
        FeatureDescriptor(v2),
        FeatureInDescription(v1,d),
        FeatureInDescription(v2,d),
        Equal(syntactic-form(v1,d), "ADJ"),
        Equal(syntactic-form(v2,d), "REL-CLS")

Figure 5: A sample relaxation rule
Each knowledge source produces its own partial ordering of features. The partial orderings are then integrated to form a directed graph. For example, perceptual knowledge may say to relax color. However, if the color value was asserted in a relative clause, linguistic knowledge would rank color lower, i.e., placing it later in the list of things to relax. Since different knowledge sources generally have different partial orderings of features, these differences can lead to a conflict over which features to relax. It is the job of the best-candidate algorithm to resolve the disagreements among knowledge sources. Its goal is to order the referent candidates, Ci, so that relaxation is attempted on the best candidates first. Those candidates are the ones that conform best to a proposed feature ordering. To start, the algorithm examines pairs of candidates and the feature orderings from each knowledge source. For each candidate Ci, the algorithm scores the effect of relaxing the speaker's original description to Ci, using the feature ordering from one knowledge source. The score reflects the goal of minimizing the number of features relaxed while trying to relax the features that are "earliest" in the feature ordering. It repeats its scoring of Ci for each knowledge source, and sums up its scores to form Ci's total score. The Ci's are then ordered by that score (a minimal sketch of this scoring appears at the end of this subsection).

Figure 6 provides a graphic description of this process. A set of objects in the real world are selected by the partial matcher as potential candidates for the referent. These candidates are shown across the top of the figure. The lines on the right side of each box correspond to the set of features that describe that object. The speaker's description is represented in the center of the figure. The set of specified features and their assigned feature values (e.g., the pair Color-Maroon) are also shown there. A set of partial orderings are generated that suggest which features in the speaker's description should be relaxed first - one ordering for each knowledge source (shown as "Linguistic," "Perceptual," and "Hierarchical" in the figure). These are put together to form a directed graph that represents the possible, reasonable ways to relax the features specified in the speaker's description. Finally, the referent candidates are reordered using the information expressed in the speaker's description and in the directed graph of features.

[Figure 6: Reordering referent candidates - candidate objects with their features, the speaker's description, per-knowledge-source partial orderings, and the merged directed graph of features.]

Once a set of ordered, potential candidates is selected, the relaxation mechanism begins step 3 of relaxation; it tries to find proper relaxation methods to relax the features that have just been ordered (success in finding such methods "justifies" relaxing the description). It stops at the first candidate that is reasonable.
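To make the best-candidate scoring concrete, here is a small sketch under our own assumptions: each knowledge source is reduced to a ranked feature list, and a candidate's score counts how many features must be relaxed, weighted by how late each sits in that source's ordering. The weighting arithmetic is our invention; the paper does not specify it.

    # Hypothetical scoring of referent candidates across knowledge sources.
    # Each knowledge source contributes a ranked list of features, earliest
    # (most relaxable) first; the real system merges partial orders instead.

    def candidate_score(description, candidate, source_orderings):
        """Lower is better: few relaxed features, all of them 'early'."""
        relaxed = [f for f, v in description.items()
                   if candidate.get(f) != v]       # features that must change
        total = 0
        for ordering in source_orderings:          # one list per source
            for feature in relaxed:
                # Relaxing a feature late in an ordering is penalized more.
                rank = (ordering.index(feature)
                        if feature in ordering else len(ordering))
                total += 1 + rank
        return total

    def order_candidates(description, candidates, source_orderings):
        """Attempt relaxation on the best-conforming candidates first."""
        return sorted(candidates,
                      key=lambda c: candidate_score(description, c,
                                                    source_orderings))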
"the outlet near the top of the tube">, comparatives (e.g. "the larger tube") and superlatlves (e.g., "the longest tube"). These can be relaxed. The slmpler features of an object (such as slze or color) that are speclfied in the speaker's descrlptton are also open to relaxation. Often the objects in focus In the real world implicitly cause other objects to be In focus [14, 2{]]. The subparts of an object ~n focus, for example, are reasonable candidates for the referent of a fazhng description and should be checked. At other times, the speaker might attribute features of a subpart of an 213 object to the whole object (e.g., describing a plunger that Is composed of a red handle, a metal rod, a blue cap, and a green cup as "the green plunger"). In these cases, the relaxation mechanism utilizes the part-whole relation in object descriptions to suggest a way to relax the speaker's description. Relaxation of a description has a few global strategies that can be followed for each part of the description: (I) drop the errorful feature value from the description altogether, (2) weaken or tighten the feature value but keep its new value close to the specified one, or (S) try some other feature value. These strategies are realized through a set of procedures (or reLa=,,tion methods) that are organized hierarchically. Each procedure is an expert at relaxing its particular type of feature. For example, a Generat e- Similar- Feature-Values procedure is composed of procedures llke Generate-Similar-Shape- Values, Generate-Similar-Color-Values and Generate- Similar-Size-Values. Each of those procedures are specialists that attempt to first relax the feature value to one "near" the current one (e.g., one would prefer to first relax the color "red" to "pink" before relaxing it to "blue") and then. d that fails, to try relaxing it to any of the other possible values. If those fail. the feature would simply be ignored. 3.3 An example on handling a misreference This section describes how a referent identification system can handle a mlsreference using the scheme outlined in the previous section. For the purposes of thls example, assume that the water pump objects currently in focus include the CAP. the MAINTUBE. the AIRCHAMBER and the STAND (see Figure l{a) for a picture of these parts) Assume also that the speaker tries to describe two of the objects. ". two devices that are clear piastlc One of them has two openings on the outside with threads on the end, and its about five inches long. The other one tsa rounded piece with a turquoise base on it. Both are tubular. The rounded piece fits loosely over...". The reference system can find a unique referent for the first obJect but not for the second. The relaxation algorithm will be shown below to reduce the set of referent candidates for the second description down to two. It. then. requires the system/listener to try out those candidates to determine if one. or both, fits loosely. The protocols exhibit a similar result when the listener uses "fits loosely" to get the correct referent (eg.. Excerpt 6 exemplifies where the "fit" can confirm that the proper referent was found). Figure 7 provides a simplified and hnearlzed vlew of the actual KL-One representatlon of the speaker's descriptions after they have been parsed and semantically interpreted. 
3.3 An example of handling a misreference

This section describes how a referent identification system can handle a misreference using the scheme outlined in the previous section. For the purposes of this example, assume that the water pump objects currently in focus include the CAP, the MAINTUBE, the AIRCHAMBER, and the STAND (see Figure 1(a) for a picture of these parts). Assume also that the speaker tries to describe two of the objects: "... two devices that are clear plastic. One of them has two openings on the outside with threads on the end, and it's about five inches long. The other one is a rounded piece with a turquoise base on it. Both are tubular. The rounded piece fits loosely over ...". The reference system can find a unique referent for the first object but not for the second. The relaxation algorithm will be shown below to reduce the set of referent candidates for the second description down to two. It then requires the system/listener to try out those candidates to determine if one, or both, fits loosely. The protocols exhibit a similar result when the listener uses "fits loosely" to get the correct referent (e.g., Excerpt 6 exemplifies how the "fit" can confirm that the proper referent was found).

Figure 7 provides a simplified and linearized view of the actual KL-One representation of the speaker's descriptions after they have been parsed and semantically interpreted. A representation of each of the water pump objects currently under consideration is presented in Figure 8. Each provides a physical description of the object - in terms of its dimensions, the basic 3-D shapes composing it, and its physical features - and a basic functional description of the object. The first entry in each representation in Figure 8 (shown in uppercase) defines the basic kind of entity being described (e.g., "TUBE" means that the object being described is some kind of tube). The words in mixed case refer to the names of features, and the words in uppercase refer to possible fillers of those features from things in the water pump world. The "Subpart" feature provides a place for an embedded description of an object that is a subpart of a parent object. Such subparts can be referred to on their own or as part of the parent object. The "Orientation" feature, used in the representations in Figure 8, provides a rotation and translation of the object from some standard orientation to the object's current orientation in 3-D space. The standard orientation provides a way to define relative positions such as "top," "bottom," or "side."

    Descr1: (DEVICE (Transparency CLEAR)
                    (Composition PLASTIC)
                    (Subpart (OPENING))
                    (Subpart (OPENING))
                    (Subpart (THREADS (End-Position END)))
                    (Dimensions (Length 5.0))
                    (Analogical-Shape TUBULAR))

    Descr2: (FIT-INTO
              (Outer (DEVICE (Transparency CLEAR)
                             (Composition PLASTIC)
                             (Shape ROUND)
                             (Analogical-Shape TUBULAR)
                             (Subpart (BASE (Color TURQUOISE)))))
              (Inner)
              (FitCondition LOOSE))

Figure 7: The speaker's descriptions

The first step in the reference process is the actual search for a referent in the knowledge base. The reference identification process is incremental in nature, i.e., the listener can begin the search process before he hears the complete description. This was observed throughout the videotape excerpts, and the algorithm presented here is actually designed to be incremental. The KL-One Classifier compares the features specified in the speaker's descriptions (Descr1 and the "Outer" feature of Descr2 in Figure 7) with the features specified for each element in the KL-One taxonomy that corresponds to one of the current objects of interest in the real world. Notice that some features are directly comparable. For example, the "Transparency" feature of Descr1 and the "Transparency" feature of MAINTUBE are both equal to "CLEAR." Other features require further processing before they can be compared. The OPENING value of "Subpart" in Descr1 is thought of primarily as a 2-D cross-section (such as a "hole"), while two CYLINDER subparts of MAINTUBE are viewed as (3-D) cylinders that have the "Function" of being outlets, i.e., OUTLET-ATTACHMENT-POINTS. To compare OPENING and CYLINDER, the inference must be made that both things can describe the same thing (similar inferences are developed in [18]). One way this inference can occur is by recursively examining the subparts of MAINTUBE with the partial matcher until the cylinders are examined at the 2-D level. At that level, an end of the cylinder will be defined as an OPENING. With that examination, the MAINTUBE can be seen as described by Descr1.

Descr2 presents different problems. Descr2 refers to an object that is supposed to have a subpart that is TURQUOISE. The Classifier determines that Descr2 could not describe either the CAP or STAND because both are BLUE. It also could not describe the MAINTUBE6 or AIR CHAMBER since each has subparts that are either VIOLET or BLUE.
The Classifier places Descr2 as best it can in the taxonomy, showing no connections between it and any of the objects currently in focus.

6Since Descr1 refers to MAINTUBE, MAINTUBE could be dropped as a potential referent candidate for Descr2. We will, however, leave it as a potential candidate to make this example more complex.

[Figure 8: The objects in focus - KL-One representations of the CAP, MAINTUBE, AIR CHAMBER, and STAND, each giving its composition (PLASTIC), transparency, dimensions, colored (BLUE or VIOLET) cylindrical and hemispherical subparts with Orientation (rotation and translation) values, and OUTLET-ATTACHMENT-POINT functions.]

At this point, a probable misreference is noted. The reference mechanism now tries to find potential referent candidates, using the taxonomy exploration routine described in Section 3.2.3, by examining the elements closest to Descr2 in the taxonomy and using the partial matcher to score how close each element is to Descr2.7 The matcher determines MAINTUBE,
STAND, and AIR CHAMBER as reasonable candidates by aligning and comparing their features to Descr2.

7The partial matcher scores are numerical scores computed from a set of role scores that indicate how well each feature of the two descriptions match. Those feature scores are represented on a scale, from HIGHEST (+), through (>), (?), to LOWEST (-).

Scoring Descr2 to MAINTUBE:
o a TUBE is a kind of DEVICE: (>)
o the Transparency of each is CLEAR: (+)
o the Composition of each is PLASTIC: (+)
o a TUBE implies Analogical-Shape TUBULAR, which implies Shape CYLINDRICAL, which is a kind of Shape ROUND: (>)
o the recursive partial matching of subparts: a BASE is viewed as a kind of BOTTOM. Therefore, BASE in Descr2 could match to the subpart in MAINTUBE that has a Translation of (0.0 0.0 0.0) - i.e., Threads of MAINTUBE. However, they mismatch, since color TURQUOISE in Descr2 differs from color VIOLET of MAINTUBE: (-)

Scoring Descr2 to STAND:
o a TUBE is a kind of DEVICE: (>)
o the Transparency of each is CLEAR: (+)
o the Composition of each is PLASTIC: (+)
o a TUBE implies Analogical-Shape TUBULAR, which implies Shape CYLINDRICAL, which is a kind of Shape ROUND: (>)
o the recursive partial matching of subparts: BASE in Descr2 could match to the subpart in STAND that has a Translation of (0.0 0.0 0.0) - i.e., Base of STAND. However, they mismatch, since color TURQUOISE in Descr2 differs from color BLUE of STAND: (-)

Scoring Descr2 to AIR CHAMBER:
o a CONTAINER is a kind of DEVICE: (>)
o the Transparency of Descr2, CLEAR, matches the Transparency of ChamberTop, ChamberOutlet, and ChamberBody of AIR CHAMBER but mismatches the Transparency of ChamberBottom of AIR CHAMBER. Therefore, the partial match is uncertain: (?)
o the Composition of each is PLASTIC: (+)
o the subparts of AIR CHAMBER have Shape HEMISPHERICAL and CYLINDRICAL, which are each a kind of Shape ROUND: (>)
o the recursive partial matching of subparts: BASE in Descr2 could match to the subpart in AIR CHAMBER that has a Translation of (0.0 0.0 0.0) - i.e., ChamberBottom of AIR CHAMBER. However, they mismatch, since color TURQUOISE in Descr2 differs from color BLUE of AIR CHAMBER: (-)

The above analysis using the partial matcher provides no clear winner, since the differences are so close that the scores generated for the candidates are almost exactly the same (i.e., the only difference was in the score for Transparency). All candidates, hence, will be retained for now.
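The near-tie can be reproduced with a toy version of the matcher's bookkeeping. The numeric values assigned to the symbolic levels below are our invention; the paper gives only the ordinal scale.

    # Hypothetical tally of the partial matcher's symbolic feature scores.
    # The numeric weights are assumed; the paper gives only an ordinal scale.
    WEIGHT = {"+": 4, ">": 3, "?": 2, "-": 1}

    feature_scores = {
        "MAINTUBE":    [">", "+", "+", ">", "-"],
        "STAND":       [">", "+", "+", ">", "-"],
        "AIR CHAMBER": [">", "?", "+", ">", "-"],  # transparency uncertain
    }

    for candidate, marks in feature_scores.items():
        print(candidate, sum(WEIGHT[m] for m in marks))
    # MAINTUBE and STAND tie; AIR CHAMBER differs only by the (?) on
    # Transparency, so all three candidates are retained.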
Thls suggests relaxation of Descr2 in the order: ]Shape} < |Color.Subport| < |Tronsporency.COmpOSi t ;on.Analogical-ShaDe ,F; t | The set of features on the left side of a "<" symbol is relaxed before the set on the rlght side The order that the features inside the braces. ")~", are relaxed is left unspecified {i.e., any order of relaxation Is alrlght) Perceptual Information about the domain also provldes suggestlons. Whenever a feature has feature values that are close, then one should be prepared to relax any of them to any of the others (we call thls the "clustered feature value rule") [n thls example. smce the colors are all very close - BLUE. TURQUOISE, and VIOLET - then Color may be a reasonable thing to relax. Hxerarchlcal Information about how closely related one feature value Is to another can also be used to determine what to relax. The Shape values are a good example. A CYLINDRICAL shape Is also a CONICAL shape, which Is also a 3-D ROUND shape. Hence. It Is very reasonable to match ROUNDED to CYLINDRICAL. All of these suggestions can be put together to form the order: ~Sho~e.Co~or| < ~Su~l)art~ < |Trangporeflcy ,Compos i t i on. Ana| og i ca I--Shope. F i I: |. The referent candldates MAINTUBE. STAND, and .41R CHAMBER can be examined and possibly ordered for relaxation using the above feature ordering For this example, the relaxation of Descr2 to any of the candidates requires relaxing their SHAPE and COLOR features. Since they each require reiaxmg the same features, the candidates can not be ordered w, th respect to each other (i.e., none of the possible feature orders is better for relaxing the candidates). Hence. no one candidate stands out as the most likely referent. While no orderlng of the candidates was posslble. the order generated to relax the features In the speaker's description can be used to guide the relaxation, of each candldate. The relaxation methods mentioned at the end of the last section come Into use here. Generate-Simdar-Shape-Values can determine that HEMISPHERICAL and CYLINDRICAL shapes of the AIR CHAMBER are close to the 3D-ROUND shape.. This holds equally true for the cyhndrlcal shapes of the MAINTUBE and the STAND. Generate-Similar-Color- Values next trms relaxing the Color TURQUOISE. It determmes the colors BLUE and GREEN as the best alternates. Here only two clear winners exist - the AIR CHAMBER and the STAND - while the MAINTUBE is dropped as a candidate smce it Is reasonable to relax TURQUOISE to BLUE or to GREEN but not to VIOLET Subpart, Transparency, Analoglcal- Shape, and Composition provide no further help {though. the fact that the AIR CHAMBER has both CLEAR and OPAQUE subparts mght put it slightly lower than the ST,hVD whose subparts are all CLEAR. Thls difference. however, is not slgndicant.). Thls leaves trial and error attempts to try to complete the FIT action. The one (if any) that fits - and fits loosely - Is selected as the referent. The protocols showed that people often do just that - reducing their set of choices down as best they can and then taking each of the remalnmg chmces and trying out the requested action on them 4 Conclusion Our goal m thls work Is to budd robust natural language understanding systems, allowmg them to detect and avold mlscommunlcatlon. The goal is not to make a perfect listener but a more tolerant one that could avold many mistakes, though still wrong on occasion. In Section 2, we mtroduced a taxonomy of mlscommunlcatlon problems that occur tn expert - apprentice dialogues. 
We showed that reference mistakes are one kind of obstacle to robust communication. To tackle reference problems, we described how to extend the succeed/fail paradigm followed by previous natural language researchers. We represented real-world objects hierarchically in a knowledge base using a representation language, KL-One, that follows in the tradition of semantic networks and frames. In such a representation framework, the reference identification task looks for a referent by comparing the representation of the speaker's input to elements in the knowledge base using a matching procedure. Failure to find a referent in previous reference identification systems resulted in the unsuccessful termination of the reference task. We claim that people behave better than this, and we explicitly illustrated such cases in an expert-apprentice domain about toy water pumps.

We developed a theory of relaxation for recovering from reference failures that provides a much better model of human performance. When people are asked to identify objects, they go about it in a certain way: find candidates, adjust as necessary, re-try, and, if necessary, give up and ask for help. We claim that relaxation is an integral part of this process and that the particular parameters of relaxation differ from task to task and person to person. Our work models the relaxation process and provides a computational model for experimenting with the different parameters. The theory incorporates the same language and physical knowledge that people use in performing reference identification to guide the relaxation process. This knowledge is represented as a set of rules and as data in a hierarchical knowledge base. Rule-based relaxation provides a methodical way to use knowledge about language and the world to find a referent. The hierarchical representation makes it possible to tackle issues of imprecision and over-specification in a speaker's description. It allows one to check the position of a description in the hierarchy and to use that position to judge imprecision and over-specification and to suggest possible repairs to the description.

Interestingly, one would expect that "closest" match would suffice to solve the problem of finding a referent. We showed, however, that it doesn't usually provide the correct referent. Closest match isn't sufficient because there are many features associated with an object and, thus, determining which of those features to keep and which to drop is a difficult problem due to the combinatorics and the effects of context. The relaxation method described circumvents the problem by using the knowledge that people have about language and the physical world to prune down the search space.

ACKNOWLEDGEMENTS

I want to thank especially Candy Sidner for her insightful comments and suggestions during the course of this work. I'd also like to acknowledge the helpful comments of George Hadden, Diane Litman, Marc Vilain, Dave Waltz, Bonnie Webber, and Bill Woods on this paper. Many thanks also to Phil Cohen, Scott Fertig, and Kathy Starr for providing me with their water pump dialogues and for their invaluable observations on them.

REFERENCES

[1] Allen, James F. A Plan-Based Approach to Speech Act Recognition. Ph.D. Thesis, University of Toronto, 1979.

[2] Appelt, Douglas E. Planning Natural Language Utterances to Satisfy Multiple Goals. Ph.D. Thesis, Stanford University, 1981.

[3] Brachman, Ronald J. A Structural Paradigm for Representing Knowledge. Ph.D. Thesis, Harvard University, 1977. Also, Technical Report No.
3605, Bolt Beranek and Newman Inc.

[4] Brown, John Seely and Kurt VanLehn. "Repair Theory: A Generative Theory of Bugs in Procedural Skills." Cognitive Science 4, 4 (1980), 379-426.

[5] Cohen, Philip R. On Knowing What to Say: Planning Speech Acts. Ph.D. Thesis, University of Toronto, 1978.

[6] Cohen, P., C. Perrault and J. Allen. Beyond Question Answering. In Knowledge Representation and Natural Language Processing, W. Lehnert and M. Ringle, Eds., Lawrence Erlbaum Associates, 1981.

[7] Cohen, Philip R. The Need for Referent Identification as a Planned Action. Proceedings of IJCAI-81, Vancouver, B.C., Canada, August 1981, pp. 31-35.

[8] Cohen, Philip R., Scott Fertig and Kathy Starr. Dependencies of Discourse Structure on the Modality of Communication: Telephone vs. Teletype. Proceedings of ACL, Toronto, Ont., Canada, June 1982, pp. 28-35.

[9] Cohen, Philip R. "The Pragmatics of Referring and the Modality of Communication." Computational Linguistics 10, 2 (April-June 1984), 97-146.

[10] Gentner, Dedre. The Structure of Analogical Models in Science. Bolt Beranek and Newman Inc., July 1980.

[11] Goodman, Bradley A. Miscommunication in Task-Oriented Dialogues. KRNL Group Working Paper, Bolt Beranek and Newman Inc., April 1982.

[12] Goodman, Bradley A. Repairing Miscommunication: Relaxation in Reference. Proceedings of AAAI-83, Washington, D.C., August 1983, pp. 134-138.

[13] Goodman, Bradley A. Communication and Miscommunication. Ph.D. Thesis, University of Illinois, Urbana, 1984.

[14] Grosz, Barbara J. The Representation and Use of Focus in Dialogue Understanding. Ph.D. Thesis, University of California, Berkeley, 1977. Also, Technical Note 151, Stanford Research Institute.

[15] Grosz, Barbara J. Focusing and descriptions in natural language dialogues. In Elements of Discourse Understanding, Joshi, Webber and Sag, Eds., Cambridge University Press, 1981, pp. 84-105.

[16] Lipkis, Thomas. A KL-ONE Classifier. Proceedings of the 1981 KL-One Workshop, June 1982, pp. 128-145. Report No. 4842, Bolt Beranek and Newman Inc. Also Consul Note #5, USC/Information Sciences Institute, October 1981.

[17] Litman, Diane J. and James F. Allen. A Plan Recognition Model for Clarification Subdialogues. Proceedings of Coling84, Stanford University, Stanford, CA, July 1984, pp. 302-311.

[18] Mark, William. Realization. Proceedings of the 1981 KL-One Workshop, June 1982, pp. 78-89. Report No. 4842, Bolt Beranek and Newman Inc.

[19] McKeown, Kathleen R. Recursion in Text and Its Use in Language Generation. Proceedings of AAAI-83, Washington, D.C., August 1983, pp. 270-273.

[20] Reichman, Rachel. "Conversational Coherency." Cognitive Science 2, 4 (1978), 283-327.

[21] Reichman, Rachel. Plain Speaking: A Theory and Grammar of Spontaneous Discourse. Ph.D. Thesis, Harvard University, 1981. Also, Technical Report No. 4861, Bolt Beranek and Newman Inc.

[22] Ringle, Martin and Bertram Bruce. Conversation Failure. In Knowledge Representation and Natural Language Processing, W. Lehnert and M. Ringle, Eds., Lawrence Erlbaum Associates, 1981.

[23] Sidner, C. L., and Israel, D. J. Recognizing intended meaning and speaker's plans. Proceedings of the International Joint Conference on Artificial Intelligence, Vancouver, B.C., August 1981, pp. 203-208.

[24] Sidner, Candace Lee. Towards a Computational Theory of Definite Anaphora Comprehension in English Discourse. Ph.D. Thesis, Massachusetts Institute of Technology, 1979. Also, Report No. TR-537, MIT AI Lab.

[25] Sidner, C. L., M.
Bates, R. J. Bobrow, R. J. Brachman, P. R. Cohen, D. J. Israel, J. Schmolze, B. L. Webber, W. A. Woods. Research in Knowledge Representation for Natural Language Understanding. Report No. 4785, Bolt Beranek and Newman Inc., November 1981.

[26] Sidner, C. L., Bates, M., Bobrow, R., Goodman, B., Haas, A., Ingria, R., Israel, D., McAllester, D., Moser, M., Schmolze, J., Vilain, M. Research in Knowledge Representation for Natural Language Understanding - Annual Report, 1 September 1982 - 31 August 1983. Technical Report 5421, BBN Laboratories, Cambridge, MA, 1983.

[27] Sidner, C., Goodman, B., Haas, A., Moser, M., Stallard, D., Vilain, M. Research in Knowledge Representation for Natural Language Understanding - Annual Report, 1 September 1983 - 31 August 1984. Technical Report 5894, BBN Laboratories Inc., Cambridge, MA, 1984.

[28] Webber, Bonnie Lynn. A Formal Approach to Discourse Anaphora. Ph.D. Thesis, Harvard University, 1978. Also, Technical Report No. 3761, Bolt Beranek and Newman Inc.
1985
26
ANAPHORA RESOLUTION: SHORT-TERM MEMORY AND FOCUSING

RAYMONDE GUINDON
Microelectronics and Computer Technology Corporation (MCC)
9430 Research Blvd.
Austin, Texas 78759

ABSTRACT

Anaphora resolution is the process of determining the referent of anaphors, such as definite noun phrases and pronouns, in a discourse. Computational linguists, in modeling the process of anaphora resolution, have proposed the notion of focusing. Focusing is the process, engaged in by a reader, of selecting a subset of the discourse items and making them highly available for further computations. This paper provides a cognitive basis for anaphora resolution and focusing. Human memory is divided into a short-term, an operating, and a long-term memory. Short-term memory can only contain a small number of meaning units, and its retrieval time is fast. Short-term memory is divided into a cache and a buffer. The cache contains a subset of the meaning units expressed in the previous sentences, and the buffer holds a representation of the incoming sentence. Focusing is realized in the cache, which contains a subset of the most topical units and a subset of the most recent units in the text. The information stored in the cache is used to integrate the incoming sentence with the preceding discourse. Pronouns should be used to refer to units in focus. Operating memory contains a very large number of units, but its retrieval time is slow. It contains the previous text units that are not in the cache. It comprises the text units not in focus. Definite noun phrases should be used to refer to units not in focus. Two empirical studies are described that demonstrate the cognitive basis for focusing, the use of definite noun phrases to refer to antecedents not in focus, and the use of pronouns to refer to antecedents in focus.

INTRODUCTION

The goal of this research is to show the relation between the psychological work on anaphora resolution, based on the notion of a limited short-term or working memory, and the computational linguistics work, based on the notion of focusing. This rapprochement is important for the following reasons: 1) From a theoretical viewpoint, cognitive evidence increases the validity of the computational notion of focus. 2) Focusing corresponds to one of the reader's comprehension processes, and it needs to be incorporated in the model of the user in language understanding systems to adequately resolve ambiguities in the user's utterances and to handle language generation.

FOCUSING IN COMPUTATIONAL LINGUISTICS

According to Grosz (1977), who was interested in the resolution of definite noun phrases, focusing is the process, engaged in by participants in a discourse, of highlighting a subset of their shared reality. Grosz, Joshi, and Weinstein (1983) distinguish between two levels of focus: global focus and centering. Global focusing is a major factor in maintaining global coherence and in the interpretation of definite noun phrases. Centering is a major factor in maintaining local coherence and in the interpretation of pronouns. Grosz, Joshi, and Weinstein further define the notion of centering. Each sentence has two types of centers whose purpose is to integrate the sentence into the discourse. The backward-looking center links the current sentence to the preceding discourse. The set of forward-looking centers provides the set of entities to which further anaphors may refer. The backward-looking center corresponds, roughly, to Sidner's focus and the forward-looking centers to Sidner's potential foci.
One principle derived by Grosz, Joshi, and Weinstein is the following: if the backward-looking center of the current utterance is the same as the backward-looking center of the previous utterance, a pronoun should be used. In other words, if there are no topic shifts, continue to refer to the same entity by using a pronoun. However, violations of this principle have been presented in Grosz (1977) and noted in Grosz, Joshi, and Weinstein (1983). They have shown that pronouns are sometimes used to refer to entities mentioned many sentences back, even though the backward-looking center of intervening sentences has been changed by topic shifts.

Sidner (1979, 1983) has proposed the notion of focus in the context of interpreting anaphors, especially pronouns. In Sidner's theory, an anaphor neither refers to another word nor co-refers to another word, but rather co-specifies a cognitive element in the reader's mind. Moreover, a theory of anaphora resolution must predict the pattern of readers' correct and incorrect choices of co-specifiers and the failures to understand. This view makes explicit the consideration of the reader's mental model and inferential capabilities.

A sketch of Sidner's focusing process follows. First, an initial focus is selected on the basis of syntactic features and thematic roles indicating topicality in the first sentence. Other elements introduced in the sentence are stored as potential foci for later sentences. Second, when an anaphoric expression is encountered, this focus is tested as a co-specifier for the anaphor. It has to satisfy syntactic restrictions on co-reference (Lasnik, 1976), semantic selectional restrictions (Katz and Fodor, 1963), and pragmatic plausibility constraints expressed in the remainder of the sentence. If the focus fails as a co-specifier for the anaphor, the potential foci are tried in turn. At the same time, the new elements introduced in the sentence are stored as potential foci for later sentences. Third, the focus is updated to the selected co-specifier for the anaphor. If the focus has changed, a topic shift has occurred. The second and third steps are cyclically applied after each sentence. The advantage of using a focus mechanism is that it prioritizes and restricts the search for a co-specifier and, as a consequence, reduces the computational costs associated with inferential processing when testing the applicability of the co-specifier to the anaphor. A minimal sketch of one such cycle appears below.
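The cycle just described can be summarized in a few lines of code. This sketch is our paraphrase of the algorithm, not Sidner's implementation; passes_constraints is passed in and stands for the syntactic, selectional, and pragmatic tests named above.

    # Hypothetical sketch of one cycle of Sidner-style focus tracking.
    # passes_constraints is a caller-supplied test standing in for the
    # syntactic, selectional, and pragmatic constraints in the text.

    def resolve_anaphor(anaphor, focus, potential_foci, passes_constraints):
        """Return (co_specifier, new_focus); co_specifier is None on failure."""
        if passes_constraints(anaphor, focus):
            return focus, focus              # focus retained: no topic shift
        for candidate in potential_foci:     # try the potential foci in turn
            if passes_constraints(anaphor, candidate):
                return candidate, candidate  # focus updated: topic shift
        return None, focus                   # failure to understand

    # Example with a trivial gender test standing in for the real constraints:
    entities = [{"name": "woman", "gender": "f"}, {"name": "man", "gender": "m"}]
    ok = lambda anaphor, entity: anaphor["gender"] == entity["gender"]
    print(resolve_anaphor({"form": "he", "gender": "m"},
                          entities[0], entities[1:], ok))
    # -> ({'name': 'man', 'gender': 'm'}, {'name': 'man', 'gender': 'm'})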
COGNITIVE STUDIES OF ANAPHORA RESOLUTION

A few representative empirical studies of anaphora resolution are described below. All the experimental paradigms used share the following assumptions: 1) human memory is functionally or structurally divided into at least two types of memories, a short-term memory with small storage capacity but very fast retrieval time and a long-term memory with very large storage capacity but slow retrieval time; 2) a topic shift transfers the units currently in short-term memory to long-term memory; 3) an anaphor transfers its referent from long-term memory to short-term memory (i.e., reinstates its referent), if it was not already in short-term memory.

The first assumption is crucial. Other things being equal, computations involving retrieval from short-term memory will be faster than those involving retrieval from long-term memory. Turning to the second assumption, topic shifts have been found to be induced with a variety of linguistic devices. One of the devices is the introduction of intervening sentences between the referent and its anaphor. The intervening sentences are unrelated to the referent but related to the overall text. Another device is the specification of a temporal or spatial parameter that is outside the normal range of a situation. When describing a dinner, the phrase "Five hours later," signals that the topic of conversation is no longer the dinner. Another device is the use of an anaphor, frequently a definite noun phrase, to refer to an antecedent that is not currently the topic of conversation but is in the "background". Finally, there is the use of key phrases to signal a diversion in the flow of discourse, such as "Let's turn to,", as documented in Reichman (1978, 1984).

The general pattern for the material used in these experiments is the following. At the beginning of the text appears a sentence containing a referent (e.g., biologist). For example, "The mission included a biologist". Then, if the referent should not be in focus, the next sentence or sentences indicate a topic shift as described above (e.g., two unrelated intervening sentences). If the referent should be in focus, no devices for topic shifts are used. The following sentence then contains an anaphor (e.g., scientist, he) to the focused or non-focused referent (e.g., biologist). For example, "The scientist collected samples from the cultures". Another example is shown in Table 1 of this paper.

Carpenter and Just (1977) used eye tracking with other converging techniques to study anaphora resolution. With eye tracking, one can monitor very precisely the trajectory of the eyes, with their forward and regressive movements, and the duration of eye fixations on small segments of the text. The assumption behind using this technique is that eye movements are closely related to higher-level cognitive activities such as comprehension. Therefore, one can expect longer fixation durations on text segments requiring additional processing to be comprehended, and one can expect the eye movement pattern to mirror the selective pickup of important information in the text. They performed a series of experiments testing the effect of recency of a referent on the time course of anaphora resolution. Indirectly, they tested the effect of recency on the availability of an item in short-term memory. They presented texts where the number of sentences between the referent and the anaphor was varied from zero to three. The subjects read each sentence and, after the sentence, had to decide whether it was consistent or inconsistent with the previous sentences. The consistency judgment times and the eye fixations were recorded. The consistency judgment task, used as converging evidence with the eye movement technique, is believed to induce the subjects to integrate each new sentence and should parallel the difficulty of anaphora resolution. The overall reading time of the anaphoric sentence was measured using the eye tracking technique. Each of these tasks should be faster if the referent was in short-term memory than if the referent was in long-term memory. Response times for the consistency judgments and reading times of the anaphoric sentences increased as the number of intervening sentences increased. The sharpest difference appeared between zero and one intervening sentence. Gaze durations within the anaphoric sentence were shorter when there were no intervening sentences than in the other conditions.
These results show not only that anaphora resolution is easier when the referent is nearer the anaphor, but also that one intervening sentence may be sufficient to produce a topic shift.

Clark and Sengul (1979) used the sentence reading time technique to study anaphora resolution. In this technique, subjects control the onset and offset of the presentation of a sentence by pressing a button. The subjects are instructed to press the button to see a new sentence as soon as they have understood the current sentence. The assumption behind this technique is that additional processing required for comprehension will increase sentence reading time. Clark and Sengul (1979) measured the reading time of a sentence containing an anaphor. They distinguished between two models of the effect of recency of a referent on the speed of anaphora resolution. In the first model, called the "continuity model", entities mentioned in the discourse are searched backward from the last one. One should expect monotonically increasing reading time as the searched entity is farther back. In the second model, called the "discontinuity model", entities mentioned in the current or last sentence are kept in short-term memory and accessed first. All the entities that are further back are more likely to be in long-term memory (and not in short-term memory) and accessed second. Subjects read short paragraphs where a referent could be separated from the anaphor by zero to two intervening sentences. The reading time of the sentence containing the anaphor was fast when the referent was in the immediately preceding sentence but equally slow when it was two or three sentences before. This finding supports the discontinuity model; a sketch of the two models' predictions follows. Entities in the last processing cycle are more likely to be kept in short-term memory than entities in previously processed cycles. Once a text entity is not in short-term memory, the number of intervening sentences does not affect the speed of anaphora resolution. Lesgold, Roth, and Curtis (1979), who related the linguistic notion of foregrounding (Chafe, 1972) to the psychological notion of short-term memory, performed a series of experiments similar to those of Clark and Sengul (1979), using more varied ways to produce topic shifts, and replicated the above findings.
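The two models make cleanly different quantitative predictions, which a few lines make explicit. The millisecond constants below are invented placeholders; only the shapes of the two curves (monotonic vs. step) come from the text.

    # Hypothetical predictions of the two search models; distance is the
    # number of sentences back to the referent. The millisecond constants
    # are placeholders; only the curve shapes matter.

    def continuity_rt(distance, base=500, per_sentence=150):
        """Monotonic: each sentence searched backward adds time."""
        return base + per_sentence * distance

    def discontinuity_rt(distance, stm=550, ltm=950):
        """Step: fast while the referent is still in short-term memory
        (last sentence), uniformly slow once it is in long-term memory."""
        return stm if distance <= 1 else ltm

    for d in range(4):
        print(d, continuity_rt(d), discontinuity_rt(d))
    # Clark and Sengul's data look like the step function: fast at
    # distances 0-1, equally slow at 2 and 3.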
McKoon and Ratcliff (1980) used an activation procedure based on Chang (1980). A description of the basic paradigm and its underlying logic follows. When one reads a text, only a small part of the text information is stored in short-term memory and most of the information is stored in long-term memory. This is due to the very small storage capacity of short-term memory (7±2 chunks; Miller, 1956). Given that retrieval time in short-term memory is much faster than retrieval time in long-term memory, it will take longer to remember something from the text if the memory is stored in long-term memory than in short-term memory. In their study, subjects read a paragraph sentence by sentence. Immediately after the last sentence, the subjects were presented with a single word and had to remember whether the word had appeared previously in the text or not (an old-new recognition). If the tested word was still in short-term memory, the old-new recognition time should be faster than if it was in long-term memory. To test this hypothesis, the paragraphs were constructed in the following manner. The referent (e.g. burglar) was separated from the anaphor by either zero or two intervening sentences. The anaphor appeared in the last sentence of the paragraph. The last sentence was presented in one of three versions: 1) the subject of the sentence was a repetition (i.e. burglar) of the referent in the first sentence (anaphoric-identical); 2) the subject was the name of the category (e.g. criminal) to which the referent belonged (anaphoric-category); 3) the subject was a noun (e.g. cat) unrelated to the referent (non-anaphoric). During the experimental trials, the "referent" (i.e. burglar) was presented immediately after the last sentence for an old-new recognition. Assuming that an anaphor activates its referent by making it available in short-term memory, one can expect significantly faster old-new recognition times for "burglar" in the anaphoric-category condition than in the non-anaphoric condition. This prediction was confirmed. Surprisingly, the number of intervening sentences did not have an effect. This suggests that the two intervening sentences did not remove the referent from short-term memory (i.e. did not "background" the referent). That is probably not the case. Rather, it is likely that by testing the referent at the end of the clause, as opposed to when the anaphor is encountered, the referent had time to be reinstated in short-term memory and be highly available. This is an important point. The activation procedure was not on-line, since the old-new recognition occurred at the end of the sentence as opposed to while the sentence was read and the anaphor encountered. Another initially surprising effect was that the old-new recognition times for the referents were slower in the zero-intervening-sentence condition when the anaphor was a repetition of the referent itself than when the anaphor was the category name. This last result suggests that it is not appropriate to use a definite noun phrase, especially a repetition of the referent, to refer to an antecedent in short-term memory.

As explained previously, intervening sentences are not the only devices that transfer text units from short-term to long-term memory. Stereotypical situations have spatial and temporal parameters with legal ranges of values. If one specifies a spatial or temporal value outside these ranges, a scenario shift occurs. For example, Anderson (in Sanford and Garrod, 1981) constructed texts about stereotypical situations such as going to a restaurant. In one sentence of the text, there was a reference to a character related to the script, say a waiter. At the beginning of the next sentence, there was a mention of a temporal or spatial parameter, such as "One hour later" or "Five hours later". In the first case the parameter is within the range defining the script; in the second case it is not. The rest of the sentence contained an anaphor to the previously mentioned character, the waiter. Measuring the reading time of the anaphoric sentence, Anderson showed longer reading times when the spatial or temporal parameter was outside the range of the script than inside. This suggests that the referent was transferred from short-term to long-term memory by the scenario shift and it took longer to retrieve the referent during anaphora resolution.

The results from all these experiments support the notion that an anaphor activates its referent by making it highly available in short-term memory and that topic shifts transfer units from short-term memory to long-term memory.
However, none of these studies, except some eye movement studies, provide data on when anaphora resolution occurs during the reading of a sentence and when it occurs in relation to the lexical, syntactic, semantic, and pragmatic analyses.

COGNITIVE BASIS FOR FOCUSING

A sketch of a cognitive model of anaphora resolution is offered here. It has been heavily influenced by the short-term/long-term memory model of Kintsch and van Dijk (1978) and especially its leading edge strategy.

Structure of human memory. Analogically, human memory can be conceptualized as a three-level structure similar to the memory of most mini and mainframe computers. It consists of a small, very fast memory called short-term memory (STM); a relatively larger main or operating memory (OM); and a vast store of general world knowledge called long-term memory (LTM). The total STM is only large enough to contain 7±2 chunks of information at any one time (Simon, 1974; Miller, 1956). The resources for STM are dynamically allocated to one of two uses. First, part of the STM is used to store the incoming sentence or clause. This is a temporary storage of the sentence or clause before further processing and is called the STM buffer. The second part of STM is called the STM cache. It is used to hold over, from one sentence or clause to the next, the information necessary to provide local and global coherence. It contains a subset of the previous text items that are topical and a subset of those that are recent. Retrieval times from short-term memory are very fast.

Conceptually, operating memory is the subset of the world knowledge in long-term memory which is deemed relevant to the processing of the current part of the text. It also contains the growing memory structure corresponding to the text read so far. It contains the less topical and less recent information from the text. Retrieval times are much longer than for short-term memory. The time course of anaphora resolution is greatly determined by the current content of short-term memory and of operating memory. Moreover, pronouns and definite noun phrases are resolved using different strategies.

Cache management. During the input of a sentence into the buffer and the concurrent integration of the sentence into the cache, a subset of the semantic units held in the STM is selected to be held over in the cache for the next cycle. Following Kintsch and van Dijk (1978), the cache management strategy selects a subset T of the most topical items and a subset R of the most recent items to be held over in the cache. The selection strategy aims at maximizing the probability that an anaphor in the next sentence will refer to a semantic unit held in the cache. Cache management is applied after each sentence or clause.
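As a rough illustration, the cache-management strategy might be sketched as follows; the topicality scores and the parameters T and R are simplifications introduced here for illustration, not part of the model's published specification.

```python
# A sketch of the leading-edge-style cache update described above: hold
# over the T most topical and the R most recent units.  Topicality values
# and parameter settings are illustrative assumptions.

def update_cache(old_cache, new_units, topicality, T=3, R=3):
    candidates = list(dict.fromkeys(old_cache + new_units))  # ordered, deduped
    by_topic = sorted(candidates, key=lambda u: topicality.get(u, 0),
                      reverse=True)
    by_recency = candidates[::-1]          # later mention = more recent
    kept = []
    for unit in by_topic[:T] + by_recency[:R]:
        if unit not in kept:
            kept.append(unit)
    return kept

topicality = {"mission": 5, "biologist": 3, "launch": 1, "delay": 1}
print(update_cache(["mission", "biologist"], ["launch", "delay"],
                   topicality, T=2, R=2))
```

Note how a highly topical unit ("mission") survives in the cache even when it is no longer recent, which is what allows pronouns to reach the main topic of a text.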
Pronouns and definite noun phrases are resolved using different strategies; we will describe four cases:

1. The anaphor is a definite noun phrase and the referent is not in focus, that is, it is in operating memory.
2. The anaphor is a definite noun phrase and the referent is in focus, that is, it is in the cache.
3. The anaphor is a pronoun and the referent is in the cache (in focus).
4. The anaphor is a pronoun and the referent is in operating memory (not in focus).

It is hypothesized that the explicitness of an anaphor is a signal, used by the reader, which denotes whether the referent is in the cache or in operating memory. If the anaphor is a definite noun phrase, operating memory is searched immediately. If the referent is in operating memory, it is then reinstated into the cache. A topic shift has occurred.

If the anaphor is a definite noun phrase and the referent is in focus (i.e. in the cache), anaphora resolution will be hindered. The reader searches operating memory while the referent is in short-term memory. Correspondingly, this violates a rule of cooperative communication: use a definite noun phrase to refer to an antecedent not in focus. The definite noun phrase signals a topic shift, while in fact the same entity is being talked about.

If the anaphor is a pronoun, the cache is searched for a plausible referent. If one is found, anaphora resolution is completed. Because cache management is based on topicality and recency, pronouns can refer to the main topic of the text even when the main topic has not been mentioned directly for many sentences. Unless there is a global topic shift, the main topic in the cache remains unchanged throughout the text.

If the anaphor is a pronoun but no referent is found in the cache, it is then necessary to search operating memory. If a referent is found in operating memory, it is reinstated into the cache. A topic shift has occurred. Using a pronoun to refer to information in operating memory is detrimental to anaphora resolution. The reader first searches the cache, then the operating memory, and then has to reinstate the referent into the cache.
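A compact sketch of the four cases follows; the matching predicate and the category table are placeholders standing in for a real plausibility test, and the data structures are simplifications assumed for illustration.

```python
from dataclasses import dataclass

@dataclass
class Anaphor:
    form: str        # "pronoun" or "definite_np"
    concept: str     # e.g. "instrument"

def matches(anaphor, unit, is_a):
    # Placeholder plausibility test: identity or category membership.
    return unit == anaphor.concept or anaphor.concept in is_a.get(unit, ())

def resolve(anaphor, cache, operating_memory, is_a):
    """Sketch of the four cases: the anaphor's form decides which store is
    searched first; finding the referent outside the cache reinstates it
    there (a topic shift)."""
    if anaphor.form == "definite_np":
        # Cases 1 and 2: a definite NP signals "not in focus".
        for unit in operating_memory:
            if matches(anaphor, unit, is_a):
                cache.append(unit)          # reinstatement into the cache
                return unit
        for unit in cache:                  # case 2: resolution is hindered
            if matches(anaphor, unit, is_a):
                return unit
    else:
        # Cases 3 and 4: a pronoun signals "in focus".
        for unit in cache:
            if matches(anaphor, unit, is_a):
                return unit
        for unit in operating_memory:       # case 4: slow, then reinstate
            if matches(anaphor, unit, is_a):
                cache.append(unit)
                return unit
    return None

is_a = {"thermometer": ("instrument",)}
print(resolve(Anaphor("definite_np", "instrument"), [], ["thermometer"], is_a))
```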
COMPARISONS

A clear relation exists between the notion of focusing proposed in computational linguistics and the model of human memory and discourse processing proposed in cognitive psychology. The cache is used to store the items in focus. Given the small number of items stored in the cache, a sketchy anaphor such as a pronoun is sufficient to retrieve the referent. The cache management strategy in human memory is aimed at maximizing the probability that the cache contains the information relevant to the next cycle of computation. The cache, by containing topical and recent items, makes it possible to maintain global and local coherence. Operating memory is used to store items that are not in focus. Because the set of items is large, an informative description of the item to be searched for is needed. Definite noun phrases are used to indicate to the reader that the item is not in focus, thus in operating memory. Other things being equal, it will take more time to retrieve an item from operating memory than from the cache. The referent will need to be reinstated into the cache. This will produce a topic shift. The reinstated referent is then highly available and can be referred to by using a pronoun.

TWO ON-LINE STUDIES OF ANAPHORA RESOLUTION

The studies presented here test the notion that focus is cognitively realized in the reader's limited short-term memory. They also test Grosz, Joshi, and Weinstein's claim that definite noun phrases, and not pronouns, should be used to refer to items no longer in focus, and that pronouns, and not definite noun phrases, should be used to refer to items in focus. Moreover, if one assumes that the content of short-term memory is dynamically updated on the basis of recency and topicality, one can explain why pronouns can be used to refer to recent items and also to topical non-recent items.

A new technique, called on-line activation, was developed specifically to provide the empirical data for these studies. The on-line activation technique can be compared to "closely" tracing the execution of a program. In the on-line activation technique, passages are presented using rapid serial visual presentation (RSVP), one word at a time. In addition to reading each text, the participants were also given the task of recognizing whether some specially marked words, presented surreptitiously within the text, had appeared before in the text or not. Some of these special words were presented before in the text and others were not. We will call these specially marked words test words. This task is called an old-new recognition task.

The passages contained anaphors referring to antecedents which were either in focus or not in focus. An antecedent was removed from focus by introducing a topic shift, with the restriction that the antecedent was not the main topic of the discourse. An example text is presented in Table 1. Note that only one of the alternative sentences 5a, 5b, or 5c was presented for each text to the participants of the study. In each text, one of the test words was the referent of the anaphor. At some point before or after the anaphor was presented on the CRT, its referent was presented for old-new recognition, and recognition times and errors were collected. The delay between the onset of the anaphor and the onset of the test word is called the stimulus onset asynchrony (SOA). The anaphor is acting as a prime, which should activate the referent. The old-new recognition time for the referent test word indicates whether the referent is in the cache or in operating memory.

TABLE 1
EXAMPLE OF TEXTS WITH ANTECEDENTS IN FOCUS AND NOT IN FOCUS

Antecedent: thermometer
Anaphor: instrument

Antecedent in Focus
1- The assistant was preparing solutions for a chemistry experiment.
2- The experiment would take at least four hours.
3- There would then be a ten hour wait for the reaction to complete.
4- He measured the temperature of a solution using a thermometer.
5a- The thin instrument was not giving the expected reading.
5b- A broken instrument was not giving the expected reading.
5c- The computer terminal was not giving the expected reading.

Antecedent not in Focus
1- The assistant was preparing solutions for a chemistry experiment.
2- He measured the temperature of a solution using a thermometer.
3- The experiment would take at least four hours.
4- There would then be a ten hour wait for the reaction to complete.
5a- The thin instrument was not giving the expected reading.
5b- A broken instrument was not giving the expected reading.
5c- The computer terminal was not giving the expected reading.

In addition, there were three types of primes, as shown in sentences 5a, 5b, and 5c in Table 1. The prime could be either semantically related and referential (S+R+) as in 5a, semantically related and not referential (S+R-) as in 5b, or semantically unrelated and not referential (S-R-) as in 5c. In the S+R+ condition, the prime is the anaphor. The two conditions S+R- and S-R- were control conditions to separate out the effect of semantic priming, due to semantic association between the anaphor and the referent, on the old-new recognition for referents. A schema of the procedure is shown in Table 2. The words surrounded by stars are the test words.

TABLE 2
SCHEMA OF THE PROCEDURE

                        SOAs
Time    Before           350 msec         1250 msec
T1      The              The              The
T2      thin             thin             thin
T3      *thermometer*    instrument       instrument
T4      instrument       *thermometer*    was
T5      was              was              not
T6      not              not              giving
T7      giving           giving           *thermometer*
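For concreteness, here is a sketch of the timing logic of one RSVP trial; the printed event list stands in for the real-time display routines, and the 300 msec per-word exposure is the figure given in the Procedure section below.

```python
# A sketch of one RSVP trial with a test word flashed at a chosen SOA
# relative to the onset of the anaphor (a negative SOA would place it
# before the anaphor).  Display calls are replaced by a printed timeline.

WORD_MS = 300   # per-word exposure, as in the procedure described below

def trial_events(words, anaphor_index, test_word, soa_ms):
    events = [(i * WORD_MS, w) for i, w in enumerate(words)]
    anaphor_onset = anaphor_index * WORD_MS
    events.append((anaphor_onset + soa_ms, "*" + test_word + "*"))
    return sorted(events)

words = ["The", "thin", "instrument", "was", "not", "giving"]
for t, item in trial_events(words, anaphor_index=2,
                            test_word="thermometer", soa_ms=350):
    print(t, item)
```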
The predictions were:

1. If a referent is not in focus, due to a topic shift, the occurrence of the anaphor should reinstate the referent into the cache, leading to faster old-new recognition times. In terms of the experimental conditions, there should be a decrease in old-new recognition time at the 350 and 1250 msec SOAs in the S+R+ condition (i.e. after the anaphor), but not in the S+R- and S-R- conditions, which are not anaphoric.

2. The use of a definite noun phrase to refer to an antecedent in the cache (i.e. in focus) should be detrimental to anaphora resolution. It should slow down the recognition of the referent as old or new. In terms of the experimental conditions, if the referent is in focus, the old-new recognition times in the 350 and 1250 msec SOA conditions should be slower than in the before-SOA condition.

Method

Participants. There were 36 participants in this study.

Materials. There were 36 experimental texts. They contained as a referent an instance of a class (e.g. thermometer) to be used later as a test word, and as an anaphor the class name (e.g. instrument). In this study, the anaphor was a definite noun phrase. An example of the material was presented in Table 1. There were three priming conditions, S+R+, S+R-, and S-R-, exemplified respectively by sentences 5a, 5b, and 5c. During the presentation of each text, two or three test words were presented, one experimental and one or two fillers. The filler words were presented at semi-random locations in the text. In the entire experiment there was an equal number of old and new test words.

Procedure. The experiment was computer-controlled using real-time routines on the VAX/VMS 11/780 of the Computer Laboratory for Instruction in Psychological Research at the University of Colorado. Each participant sat in front of a CRT screen with a keyboard which had a "yes" button on the right, for old test words, and a "no" button on the left, for new test words. The texts were presented using RSVP, with each word presented in the center of the screen for 300 msec. The participants were asked to recognize whether the test words were old or new, as fast as possible but without making mistakes.

Design. There were 36 experimental texts and 18 experimental conditions. The first manipulation was the focusing of the referent: in focus or not in focus. The second manipulation was the SOA: immediately before the prime, 350 msec after, or 1250 msec after. The third manipulation was priming: S+R+, S+R-, S-R-. The design was completely within-subject, with two texts randomly assigned to each experimental condition using two randomly sampled 18 by 18 Latin squares. Each participant was randomly assigned to a row of the Latin squares.

Results and Discussion

The predicted interaction of focusing and priming is shown in Figure 1: the prime in the S+R+ condition (i.e. the anaphor) reinstates the referent into the cache, focusing it, while the referent is not reinstated in the non-referential conditions, F(2,70) = 3.6, p < 0.04, MSe = 213721 by subjects, and F(2,70) = 2.5, p < 0.09, MSe = 277568 by items. A priori comparisons show that the difference between the recognition times in the two focus conditions in the S+R+ condition is much smaller than in the other two priming conditions, S+R- and S-R-, which do not differ between themselves, t(35) = 2.6, p < 0.01, MSe = 87 by subjects, and t(35) = 2.14, p < 0.02, MSe = 114 by items. These results support the notions that items in focus are more accessible than items not in focus and that focus is realized in the cache. They also support the notion that an anaphor reinstates a referent not in focus and does so by transferring the referent to the cache.

[Figure 1. Recognition latencies (msec) at each focus (Not in Focus vs. In Focus) and priming (S+R+, S+R-, S-R-) condition.]

An a priori comparison demonstrates that using a definite noun phrase to refer to an item in focus hinders anaphora resolution. What seems to happen is a surprise effect caused by the violation of a linguistic usage relating the form of the anaphor to the focus status of its referent. The recognition time for the referent, in the focus condition, was longer at the 350 msec and 1250 msec SOAs than at the before SOA, t(35) = -4.1, p < 0.001, MSe = 24 by subjects, and t(35) = -2.9, p < 0.008, MSe = 31 by items. This is shown in Figure 2.

[Figure 2. Recognition latencies (msec) at each SOA (before, 350, 1250 msec) for a referent in focus.]

In another study (Guindon, 1982), using the same on-line activation technique, the activation of an antecedent by a pronoun was traced. In this study, it was found that referring to an antecedent not in focus by using a pronoun was detrimental to anaphora resolution. The delay between reading the anaphor and reinstating the antecedent was as long as 2400 msec. The activation of an antecedent not in focus by a pronoun takes a long time because the reader is induced: 1) to search the cache unsuccessfully; 2) to search operating memory with a "sketchy" pronoun; 3) to reinstate the referent into the cache. Activation was immediate for the antecedents in focus. As opposed to the previous study, where referring to a focused referent using a definite noun phrase hindered anaphora resolution, no such effect was observed when using a pronoun. This is expected, since pronouns signal that the referent is in the cache.

SUMMARY

The notion of focusing and the notion that the form of the anaphor signals whether the referent is in focus or not have cognitive support. Items in focus are items in the cache, which is dynamically updated to contain the T most topical and the R most recent items in the text. Because the cache contains few items, pronouns should be used to refer to items in focus. Other things being equal, anaphora resolution will be easier if the antecedent is in focus, because the retrieval times from the cache are much faster than those from the operating memory. Items not in focus are in operating memory. A definite noun phrase, because it is more descriptive than a pronoun, should be used to retrieve the antecedent from the large set of items in operating memory. However, because retrieval time is slow in operating memory, anaphora resolution is more difficult for items that are not in focus. The reinstatement of an antecedent into the cache effects a topic shift.

The on-line activation technique was developed specifically to provide empirical data on the notion of focus. The advantage of this technique over conventional memory experiments is that one can test precisely the temporal properties of various analyses and processes occurring during sentence and text comprehension. This technique can be used to distinguish between different models of anaphora resolution when these models are not easily distinguished on the basis of discourse or dialogue analysis.

REFERENCES

Carpenter, P.A. & Just, M.A. Integrative processes in comprehension. In D. LaBerge & S.J. Samuels (Eds.), Basic processes in reading: Perception and comprehension. Hillsdale, N.J.: Erlbaum, 1977.
Chafe, W. Discourse structure and human knowledge. In J.B. Carroll & R.O. Freedle (Eds.), Language comprehension and the acquisition of knowledge. Washington: Winston, 1972.

Chang, F. Active memory processes in sentence comprehension: Clause effects and pronominal reference. Memory and Cognition, 1980, 8, 58-64.

Clark, H.H. & Sengul, C.J. In search of referents for nouns and pronouns. Memory and Cognition, 1979, 7, 35-41.

van Dijk, T.A. & Kintsch, W. Strategies of discourse comprehension. New York: Academic Press, 1983.

Grosz, B.J. The representation and use of focus in dialogue understanding. Technical Note 151, Artificial Intelligence Center, SRI, 1977.

Grosz, B.J., Joshi, A.K., & Weinstein, S. Providing a unified account of definite noun phrases in discourse. Technical Note 292, Artificial Intelligence Center, SRI, 1983.

Guindon, R. On-line tracking of discourse searches. Unpublished manuscript, University of Colorado, Boulder, 1982.

Guindon, R. The effect of recency and ... Doctoral dissertation, University of Colorado, Boulder, 1985.

Just, M.A. & Carpenter, P.A. A theory of reading: From eye fixations to comprehension. Psychological Review, 1980, 87, 329-354.

Katz, J.J. & Fodor, J.A. The structure of a semantic theory. Language, 1963, 39, 170-210.

Kintsch, W. & van Dijk, T.A. Toward a model of text comprehension and production. Psychological Review, 1978, 85, 363-394.

Lasnik, H. Remarks on co-reference. Linguistic Analysis, 1976, 2, 1-22.

Lesgold, A.M., Roth, S.F., & Curtis, M.E. Foregrounding effects in discourse comprehension. Journal of Verbal Learning and Verbal Behavior, 1979, 18, 281-308.

McKoon, G. & Ratcliff, R. The comprehension processes and memory structures involved in anaphoric reference. Journal of Verbal Learning and Verbal Behavior, 1980, 19, 668-682.

Miller, G.A. The magical number seven, plus or minus two: Some limits on our capacity for processing information. Psychological Review, 1956, 63, 81-97.

Reichman, R. Conversational coherency. Cognitive Science, 1978, 2, 283-327.

Reichman, R. Extended person-machine interface. Artificial Intelligence, 1984, 22, 157-218.

Sanford, A.J. & Garrod, S.C. Understanding written language. New York: Wiley, 1981.

Sidner, C.L. Towards a computational theory of definite anaphora comprehension in English discourse. Technical Report 537, MIT Artificial Intelligence Laboratory, Cambridge, MA, 1979.

Sidner, C.L. Focusing in the comprehension of definite anaphora. In M. Brady and R.C. Berwick (Eds.), Computational models of discourse. Cambridge: MIT Press, 1983.

Simon, H.A. How big is a chunk? Science, 1974, 183, 482-488.

ACKNOWLEDGMENT

This research was performed as part of the author's doctoral dissertation while at the University of Colorado. She is extremely grateful for the help of her dissertation committee: Walter Kintsch, Peter Polson, Alice Healy, Richard Olson, Andrzej Ehrenfeucht. Burton Wagner has provided many insightful comments on this paper. MCC is kindly thanked for the technical support provided while composing this paper.
1985
27
Explanation Structures in XSEL

Karen Kukich
Computer Science Department
Carnegie-Mellon University
Pittsburgh, PA 15213
412-578-2621
Kukich@CMU-CS-A

1. Introduction

Expert systems provide a rich testbed from which to develop and test techniques for natural language processing. These systems capture the knowledge needed to solve real-world problems in their respective domains, and that knowledge can and should be exploited for testing computational procedures for natural language processing. Parsing, semantic interpretation, dialog monitoring, discourse organization, and text generation are just a few of the language processing problems that might take advantage of the pre-structured semantic knowledge of an expert system. In particular, the need for explanation generation facilities for expert systems provides an opportunity to explore the relationships between the underlying knowledge structures needed for automated reasoning and those needed for natural language processing. One such exploration was the development of an explanation generator for XSEL, which is an expert system that helps a salesperson in producing a purchase order for a computer system [10]. This paper describes a technique called "link-dependent message generation" that forms the basis for explanation generation in XSEL.

1.1. Overview of XSEL

Briefly, the function of the XSEL system is to assist a salesperson in configuring a custom-tailored purchase order for a Digital Equipment Corporation VAX computer system. XSEL works with the salesperson to elicit the functional computing requirements of the individual customer, and then goes on to select the components that best fit those requirements. The output of an XSEL session is a purchase order consisting of a list of line-items that specify hardware and software components.

There are two main phases to XSEL's processing, a fact gathering phase and a component selection phase. During the fact gathering phase XSEL carries on an interactive dialog with the salesperson to elicit values for facts that determine the customer's functional computing requirements. These might include requirements for total disk space, percent of removable disk storage, number of terminals, lines-per-minute of printing, etc. Natural language processing during the fact gathering dialog is minimal: XSEL displays menus and pre-formulated queries and accepts one- or two-word answers from the user. Once enough facts have been collected XSEL begins a silent phase of processing. During this phase a set of candidate components that satisfy the customer's basic requirements is retrieved from the DEC parts database. Within each class of component, i.e., processor, disk, terminal, etc., candidates are ranked according to their score on an evaluation function that measures the degree to which a candidate satisfies the customer's weighted functional requirements. The candidate with the highest score is selected and placed on the purchase order.

The most important knowledge structure used by XSEL during the fact gathering phase is a fact. A fact is simply a list of attribute-value pairs that represent knowledge about one of the customer's functional computing requirements. Figure 1-1 depicts a sample fact.

(FACT ^ATTRIBUTE TOTAL-DISK-SPACE
      ^STATUS INFERENCE
      ^CLASS DISK
      ^UNITS MEGABYTES
      ^MEAN 3600
      ^TOKEN G:29)

Figure 1-1: Sample XSEL Fact

The fact collection process is driven by backward-chaining rules.
A top-level rule deposits a few "core" facts for which XSEL must obtain values, such as "total-disk-space", "total-number-of-terminals", etc. One at a time, XSEL solicits a value for these core facts from the salesperson. If the salesperson answers "unknown" to a solicitation, another rule fires to deposit some additional facts that would enable XSEL to infer a value for the unknown fact. The cycle is then repeated as XSEL solicits values for each of the newly deposited facts. Any time a newly instantiated fact completes the set of facts required to infer a value for some other fact, the appropriate inference rule is automatically triggered and the value for another fact is inferred. This backward-chaining process continues until XSEL obtains values for all of the core facts, or until no more data can be collected and no more inferences can be made, in which case some default value rules fire to instantiate values for any remaining unknown facts.

The most important knowledge structure used by XSEL during the component selection phase is a rank element. Like a fact, a rank element is simply a list of attribute-value pairs. In this case the attribute-value pairs represent knowledge about a candidate's score for one term in the evaluation function. A different evaluation function is associated with each class of component, and each evaluation function is a sum of weighted terms. The terms of the evaluation function for the class disk, for example, include price, disk-pack-type, storage-capacity, average-access-time, peak-transfer-rate, and handedness. For every candidate, XSEL computes a rank value for each term in the evaluation function. The rank value for a term is the product of the candidate's normalized score for the term and a weight which represents an importance factor. The essential information needed to compute a rank value for a term for a candidate is stored in a rank element, an example of which is shown in Figure 1-2.

(RANK ^RANK-NAME AVERAGE-ACCESS-TIME
      ^NAME RA60-AA
      ^CLASS DISK
      ^RANK-VALUE -3
      ^COEFFICIENT 1
      ^VALUE 50
      ^IMPORTANCE 1
      ^TOKEN G:9)

Figure 1-2: Sample XSEL Rank

After all the rank values have been computed for a candidate they are summed to obtain a total score for the candidate. The candidate with the highest total score is selected and placed on the purchase order.
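A minimal sketch of this ranking computation follows; the terms, normalized scores, and weights are illustrative stand-ins, not XSEL's actual data.

```python
# A sketch of the candidate-ranking computation: rank value = normalized
# score * importance weight; total = sum over the terms of the class's
# evaluation function.  All numbers below are illustrative assumptions.

def total_score(terms):
    """terms: list of (normalized_score, importance) pairs."""
    return sum(score * importance for score, importance in terms)

def select(candidates):
    """candidates: mapping from candidate name to its term list."""
    return max(candidates, key=lambda name: total_score(candidates[name]))

disks = {
    "RA60-AA": [(0.9, 2), (0.5, 1)],   # e.g. price, average-access-time
    "RA81":    [(0.7, 2), (0.8, 1)],
}
print(select(disks))   # prints "RA60-AA" (2.3 vs. 2.2)
```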
The component selection phase is driven by forward-chaining rules. These rules perform the subtasks of first, retrieving candidates from the database; next, determining a quantity and cost for each of the candidates; next, computing a total rank score for each candidate; and finally, selecting the candidate with the highest rank score.

At present, the entire XSEL system consists of over three thousand OPS5 [2] rules. The explanation generator, which will be described shortly, comprises an additional five hundred rules. Anywhere from approximately five hundred to five thousand rules may fire during the fact gathering phase to create from fifty to five hundred facts, and roughly three thousand rules will fire during the component selection phase to create around one thousand rank elements. The whole process can take anywhere from ten to thirty minutes of real time, depending on how XSEL's queries are answered.

1.2. Sample Explanations

Three of the most obvious types of queries a user might ask were targeted for initial explanation development. Sample explanations from each of those types are given in this section. The following sections describe the knowledge structures and processes within both XSEL and the explanation generator that produced those explanations, as well as the goals and rationale behind them.

One type of query that is likely to be asked is why a particular component appears on a purchase order. We refer to queries of this type as "why-choice" queries. To answer a why-choice query the explanation generator must compare the rank elements for each candidate on each term of the evaluation function in order to determine which attributes were responsible for the higher score of the component that was actually selected. The following are sample explanations from the why-choice class of queries.

? why ra81

THE RA81 IS CHEAPER THAN ANY ALTERNATIVE FIXED PACK DISK, POSSIBLY BECAUSE IT HAS A SMALLER TOTAL STORAGE CAPACITY AND A SLOWER AVERAGE-ACCESS-TIME.

? why rm05

ALTHOUGH THERE ARE LESS EXPENSIVE DISKS, THE RM05 HAS A LARGER DISK PACK THAN ANY ALTERNATIVE REMOVABLE PACK DISK.

Figure 1-3: Sample Why-Choice Explanations

A second obvious type of query asks why a certain fact has whatever value it has, e.g., why total-disk-space is 3600 megabytes. We refer to queries in this class as "why-fact" queries. In the case of why-fact queries, the explanation generator must examine the facts that were created during the fact gathering phase, and it must determine how those facts are related through the backward-chaining process. An example of an explanation that was generated in response to a why-fact query follows:

? why q total-disk-space

XSEL INFERRED A VALUE OF 3600 MEGABYTES FOR TOTAL-DISK-SPACE. 3574 MEGABYTES ARE REQUIRED FOR TOTAL-USER-DISK-SPACE. THE REMAINDER IS ACCOUNTED FOR BY OTHER FACTORS, SUCH AS SUM-OF-SYSTEM-DISK-SPACE.

3574 MEGABYTES WAS INFERRED FOR TOTAL-USER-DISK-SPACE BECAUSE 2859 MEGABYTES ARE REQUIRED FOR USER-DISK-SPACE AND THAT VALUE IS MULTIPLIED BY 125 FOR PERCENT-FOR-EXPANSION.

XSEL INFERRED A VALUE OF 25 MEGABYTES FOR SUM-OF-SYSTEM-DISK-SPACE FROM 1 SYSTEM-DISK-SPACE REQUIREMENT OF 25 MEGABYTES FOR THE VMS OPERATING-SYSTEM.

Figure 1-4: Sample Why-Fact Explanation

This explanation would have ended immediately following the first paragraph had the user not previously asked for longer explanations. But because the user had earlier typed "explain more", the explanation generator went on to explain the terms "total-user-disk-space" and "sum-of-system-disk-space", which were introduced in the first paragraph. If the user were to type "explain more" a second time, and then ask the same question "why quantity total-disk-space", the explanation generator would not stop where it did. Instead, it would go on to explain the terms user-disk-space, percent-for-expansion, and system-disk-space, which were introduced in the second and third paragraphs. There is no upper bound on the number of levels of explanation the user may request. If the number of levels to explain is high, XSEL will keep explaining until it reaches those facts whose values were set either by user input or by default, in which case there is nothing further to explain. The user can also type "explain less" at any time, thus decreasing the number of levels to explain. The lower bound on the number of levels to explain is one.

The mechanism for determining which term to explain next is a queue. As new terms are introduced they are placed in the queue. The queue was originally implemented as a stack, but as explanations got longer they began to sound less coherent using the stack mechanism.
So the queue was implemented, but the stack was retained. Now one can toggle between them by typing "explain queue" or "explain stack", thus producing alternatively structured explanations for the sake of comparison. (A sketch of this queue mechanism is given after Figure 1-5 below.)

The third obvious class of queries asks why a certain quantity is needed for any line-item. We refer to these as "why-line-item" queries. Why-line-item queries require the most complicated processing, because the explanation generator must understand how the line-item that was selected relates back to the facts that determine the quantity needed, and there is usually a long sequence of forward-chaining rules as well as the whole evaluation function mechanism between the creation of the facts and the creation of the line-items. Figure 1-5 shows a sample explanation from the why-line-item class. In this example, the number of levels to explain was set at two. The first two paragraphs comprise the first level, so the explanation could have stopped there; the remaining two paragraphs were generated in response to terms introduced in the first two paragraphs.

? why q ra60

4 RA60-AA'S WERE SELECTED IN ORDER TO SATISFY A REMOVABLE-DISK-SPACE REQUIREMENT OF 900 MEGABYTES. EACH RA60-AA PROVIDES A CAPACITY OF 205 MEGABYTES. THEREFORE, 4 RA60-AA'S ARE REQUIRED TO YIELD AT LEAST 90 PERCENT OF THE REMOVABLE-DISK-SPACE CAPACITY OF 900 MEGABYTES.

900 MEGABYTES OF THE TOTAL-DISK-SPACE REQUIREMENT OF 3600 MEGABYTES WERE ALLOCATED TO REMOVABLE-DISK-SPACE.

XSEL INFERRED A VALUE OF 900 MEGABYTES FOR REMOVABLE-DISK-SPACE BECAUSE 3600 MEGABYTES ARE REQUIRED FOR TOTAL-DISK-SPACE AND 2700 FIXED-DISK ARE SUBTRACTED FROM IT TO GET THE DIFFERENCE.

THE VALUE OF 205 MEGABYTES FOR REMOVABLE-DISK-UNIT-CAPABILITY WAS RETRIEVED FROM THE DATABASE.

Figure 1-5: Sample Why-Line-Item Explanation
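The queue mechanism and the user-adjustable number of levels described above can be sketched as follows; the subterm table is a toy stand-in for the new facts that explaining a term introduces, and the function itself is an illustration, not XSEL's OPS5 code.

```python
from collections import deque

# A sketch of the term agenda that drives multi-level explanations, with
# the queue/stack toggle described above.

def explanation_order(root, subterms_of, levels=1, use_queue=True):
    agenda = deque([(root, 1)])
    order = []
    while agenda:
        term, depth = agenda.popleft() if use_queue else agenda.pop()
        if term in order:          # never explain the same term twice
            continue
        order.append(term)
        if depth < levels:         # "explain more"/"explain less" move this bound
            for sub in subterms_of.get(term, []):
                agenda.append((sub, depth + 1))
    return order

subterms_of = {
    "total-disk-space": ["total-user-disk-space", "sum-of-system-disk-space"],
    "total-user-disk-space": ["user-disk-space", "percent-for-expansion"],
}
print(explanation_order("total-disk-space", subterms_of, levels=2))
```

Toggling use_queue reproduces the breadth-first (queue) versus depth-first (stack) orderings whose coherence the paper compares.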
Swartout's work on an explanation generator for the Digitalis Therapy Advisor attacked the accuracy and directness problems successfully. His approach was to redesign the DTA, separating descriptive facts from domain principles and from the abstract goals of the system. This allowed the performance program to be generated by an automatic programmer, which also created a goal refinement structure in the process. The goal refinement structure captures the knowledge that goes into writing the performance program, and makes it accessible to the explanation generator, where it can be used to produce explanations that are both accurate and direct. Furthermore, as Swartout points out, such explanations can be viewed as "justifications" for the system's behavior. One of the major contributions of the DTA work was to demonstrate that a singte explicit representation of knowledge can and should drive both the automatic program generation process and the explanation generation process. Further research supporting the "shared explicit knowledge" approach to automatic knowledge acquisition, rule generation, and explanation generation is underway for at least three other projects [8] [4] [5] [6]. 2.2. The XSEL Explanation Approach XSEL's approach to explanation generation differs from all of 231 the approaches discussed above. The sheer size of XSEL would make implementing canned responses tedious. Similarly, the number of rule firings on any run would make reading execution trace explanations labonous even. or perhaps especially, if they were translated into natural lanaguage. The approach taken by Swartout of extracting the regularities and representing them separately as domain principles would work for the backward- chaining rules used during XSEL's fact gathering phase, but the forward-chaining rules used during the component selection phase are so irregular that attempting to extract regularities would result in the duplication of nearly the entire set of rules. Some other common denominator needed to be found in order to achieve some computational power for explanation generation. For about two thirds of XSEL's explanation facilities, that computational power was bought by the creation of links, which are simple knowledge structures that establish relations between elements in XSEL's working memory. The role of links will be the focus of the remainder of this paper. But first a brief general overview of all the explanation facilities is given. There is a simple variant of a goal tree explanation facility built into XSEL. so that the system can always state why it wants a value for any fact it reduests during the fact gathering dialog. But the explanation samples shown in the previous section were generated by an entirely different mechanism, a message-based explanation generator. A message-based explanation generator is a two-phase processor that first generates and organizes messages based on the contents of working memory, and then maps those messages into surface strings. Two different types of message generator have been implemented for XSEL. The message generator used to answer why-choice queries may be called a comparative message generator; it examines and compares the rank elements produced by the evaluation functions to determine what roles they play in the selection of the chosen component, and then it creates a,opropriate messages, The message generators used to answer the why-fsct and why. 
line.item clueries may be called link-dependent message generators: they examine the facts and the links between facts to determine what relations hold among them, and then they create appropriate messages. Explanations produced by both the comparative message generator and the link-dependent message generators are certain to be accurate because they always originate from the contenfs of working memory. Special steps had to be taken to ensure the directness of the link-dependent message generators. however. Those steps will be discussed in the following sections. which describe the workings of the lipk-dependent message generators in some detail. Discussion of the comparative message generator and the surface generator will be reserved for other occasions. 3. Link-dependent Message Generation 3.1. Generic vs. Relational Explanations Both of the link-dependent message generators are capable of operating in two modes, generic mode and relational mode. (The user can toggle between modes by typing "explain generic" or "explain relational".) The explanations shown above in Figures 1-4 and 1-5 are relational explanations: they explicate the relations that hold between facts. Some of those relations are arithmetic relations, such as sum and product, and some are abstract relations, such as satisfaction and allocation relations. Contrast the relational explanation for the query "why q total- disk-space" shown in Figure 3-1 with the generic explanation for the same query shown in Figure 1-4. Generic explanations do not explicate the relations that hold between facts; they simply state that some generic dependencies exist. The same message generator is used to generate both generic and relational explanations. (Notice that the same queuing mechanism is used to explain subsequent terms in both generic and relational explanations.) The difference between generic and relational explanations results from the fact that there are two different tyoes of links in XSEL's memory, qeneric links and relational links. Both types of links establish -~ connectton between two or more facts. The difference is that generic links are ~lways unnamed, binary links, whereas relational links are always named, n.ary links, where the name may be an arithmetic relation such as sum or product, or an abstract relation, such as satisfaction or allocation. Both types of links au'e deposited into 232 ? why q total-disk-space THE VALUE OF 3600 MEGABYTES FOR TOTAL-DISK IS DEPENDENT ON 1424 KILOBYTES FOR TOTAL-APPLICATION-DIS 110592 KILOBYTES FOR PROGRAMMER-DISK.SPAC 2816000 KILOBYTES FOR TOTAL-DATA-FILE.DISK-S 600 KILOBYTES FOR PAGE-AND-SWAP-SPACE AND 25600 KILOBYTES FOR SYSTEM-DISK-SPACE. THE VALUE OF 25600 KILOBYTES FOR SYSTEM.DIS IS DEPENDENT ON VMS FOR OPERATING-SYSTEM . THE VALUE OF 600 KILOBYTES FOR PAGE-AND-SW IS DEPENDENT ON 200 KILOBYTES FOR CODE-SIZE. THE VALUE OF 2816000 KILOBYTES FOR TOTAL-DA IS DEPENDENT ON 2816000 KILOBYTES FOR DATA-FILE.DISK.SPAC THE VALUE OF 110592 KILOBYTES FOR PROGRAM IS DEPENDENT ON 2048 KILOBYTES FOR EDITOR-DISK-SIZE, 2816000 KILOBYTES FOR LARGEST-DATA.FILE, 4 PROGRAMMERS FOR NUMBER-OF.PROGRAMME AND 102400 KILOBYTES FOR LANGUAGE.USE.DISK THE VALUE OF 1424 KILOBYTES FOR TOTAL.APPLI IS DEPENDENT ON 1024 KILOBYTES FOR SOFTWARE-DEDICATED. AND 150 KILOBYTES FOR APPLICATION.DISK-SPAC Rgu re 3-1: Sample Generic Explanation XSEL's working memory by the re;lsoning rules that fire during program execution. 
As links are de;)osited during XSEL's execution, two dynamically growing networks are built up; the generic network is a sim0le dependency network, and the relational network is an augmented semantic network. These networks are the mare source of knowledge for the link- dependent message generators. A generic link is a very sJmple memory element consisting of only two attributes, a source attribute and a sink attribute. The value of the source attribute is the token (i.e., unique identifier) of some fact that entered into the inference of the resultant fact; the value of the sink attribute is the token of the resultant fact. For example, the rules that fire to infer a value for the fact total-disk- 233 space will deposit into working memory at lea.st five generic links, each having the token of the fact total-disk-space in its sink attribute and each having the token of a fact that entered into the calculation of the value for total-disk-space, such aS total- application-disk-space, programmer-disk-space, etc., in its source attribute. An example of a generic link is shown in Figure 3-2. A relational link is a sJightly richer memory element which not only names the relation that holds between two or more facts, but also categorizes it. Figure 3-3 displays one arithmetic relational link and one abstract relation link. (generic.link tsource <total-application-disk-space-token> tsink <total-disk.space-token> ) Figure 3-2: Sample Generic Link (relational- link trelation sum tcategory arithmetic tsmk <total-disk-space-token> tsourcet <total-user-disk-space-token> tsource2 <sum-of- System-disk-space- token> tSOurce3 <sum-of-page-and-swap-space-token> ) (relational-link ~retation satisfaction ¢category reason tsink <quantity-of-disks-token> tsource <total-disk-space- token> ) Figure 3-3: Sample Arithmetic and Abstract Relational Links The network formed by relational links is in some ;)laces more dense and in other ;)laces less dense than the network formed by genenc links; arithmetic relational links create more levels thus making the relaUonal network denser, while abstract links tend to bridge long chains of facts, thus making the network sparser. To see this distinction, consider the arithmetic formula used by XSEL to calculate the total-disk-space requirement: total-disk.space = ( (total. application -disk. space + programmer-disk-space ÷ total-data- file-disk- space) * 125%) + sum of system.disk.space + sum of page-and.swap-space The rules that execute this formula create at least five generic links linking total-disk.space to total-application-disk-space, programmer-disk-space, total-data-file-disk-space, one or more system-disk-sp,3ce facts, and one or more page-and-swap-space facts. At the same time they create one relational link linking total-disk-space to three new intermediate level facts, total-user- disk-space, sum.of-system-disk-space, and sum-of-page-and- swap.space, and they create additional relational links linking each of the intermediate facts to their subfacts. Total.user-disk- space is a newly created intermediate fact, and a relational link, with rrelation percent, is created linking it to two more new intermediate facts, user-disk-space and percent.for-expansion. Another relational link is in turn created linking user-disk-space to the three facts total-application-disk-space, programmer-disk- space, and total-data-file-disk-space. 
On the other hand, the rules that determine how many RA60 disk drives are needed, for example, create a dense generic network linking all the facts that enter into the calculation of total- disk-space to the facts that allocate some portion of that amount to fixed-disk-space. From there the network would get even denser as fixed-disk-space is linked tO the fixed.disk.unit. capabihty and quantity-of-fixed-disks facts for each candidate. In fact, these generic links are not currently created due to limitations of working memory space. In contrast to the potentially dense generic network, the relational network contains only a few abstract relation links, such as satisfaction and allocation links, that bridge many of the generic links, thus resulting in a sparser network (and in more direct explanations). There are good reasons for the existence of two complete networks. Essentially, the tradeoff is that while generic links are trivial tO create, they do not facilitate satisfying explanations. On the other hand, the creation of relatil)nal links often requires manual intervention, lout relational links facilitate direct explanations. Compare again the generic explanation in Figure 3- I to its corresponding relational explanation in Figure 1.4. Generic links require little effort to create because they simply incorporate the tokens of the facts that are used in an inference 234 rule. In fact, an automatic rule generator was developed for automatically creating most of XSEL's backward.chaining fact- gathering rules from simple arithmetic formulas such as the formula for total-disk-spsce discussed above.lit was a trivial task to have the automatic rule generator include the actions required to have the inference rules create the generic links. The task of augmenting the fact-gathering rules to create arithmetic relational links was also automatable, for the most part. An automatic link-creator was written to parse the arithmetic formulas that were input to the rule generator and create the appropriate links. This parser identified the main arithmetic operations, created names for intermediate facts, and modified XSEL's rules to have them create the arithmetic relational links. The output of the automatic link-creator required only minor manual retouching in those cases where its heuristics for creating names for intermediate facts fell short. 2 But the task of augmenting the component selection rules to create the abstract relational links between facts has so far resisted an automatic solution. These links are now being added manually. They require the effort of someone who understands the workings of XSEL and recognizes what explanations might be called for and. consequently, which rules should be modified to create relational links. 3.2. Overview of Processing The processing of a query by a link-dependent message generator goes as follows. When the initial query is input, a query-interpretation context is entered. In this context some rules fire tO identify and locate the fact in question, to create a query-term with the same token as the fact. and to place that query-term in the query-queue. Following query-interpretation, a message generation cycle consisting roughly of the following five steps reiterates: 1) focus on the next query-term in the queue, 2) locate the links related to that query-term, 3) select an explanation schema 3 based on the links found, 4) create 1XSEL's automatic ride gammer was v~ten by Samly Marcus. 
2XSEL's auSommic link-creatm ~S vmtmen by kTr.ttaet ~w~ additional query-terms and messages suggested by the selected schema, and 5) turn control over to the surface generator. Each time a new query-term is created, queue-control rules decide whether to place it in the query-queue, depending on such factors as whether the term has already been explained and how many levels of explanation the user has requested. As long as the query-queue is not empty, the message generation cycle is reiterated. When the message generator is in generic mode, it b constrained to locating generic links during step 2 of the cycle, and it is constrained to selecting the generic schema during step 3 of the cycle. A simplified version of the generic schema is depicted in Figure 3.4. The first directive of the generic schema (Schema-directives::Generic-schema (make goal tgoal-name create.extra.query.terms tstatus reiterate) (make goal Tgoal-name create-message tgredicate IS-DEPENDENT ~erml <current-focus>) (make goal rgoal-narne create-message ?predicate ON ~terml <link.focus> tstatus reiterate) ) Figure 3-4: The Generic Schema directs the message generator to create additional query.terms for all the facts that are linked to the current query-term. The second directive directs the message generator to create one message with the predicate "IS-DEPENDENT" and with the focus-token of term1, which is the current query.term. The surface realization of this message will be the clause "THE VALUE OF 3600 MEGABYTES FOR TOTAL-DISK-SPACE IS DEPENDENT ". The third directive of the generic schema directs the message generator to create one additional message with the predicate "ON" and the focus.token of terror for each of the link terms found. These messages will emerge as prepositional phrases in their surface form, such as " ON 1424 KILOBYTES FOR TOTAL-APPLICATION.DISK.SPACE, 110592 KILOBYTES 3'The term so/letup wls adOl~ed fRmt ~e ~ of McKeown(11), ~ simdet smBclu=~s f~ discou~e o¢~=anizatlo~. FOR PROGRAMMER.DISK.SPACE , 2816000 KILOBYTES FOR TOTAL-DATA.FILE.DISK.SPACE , 600 KILOBYTES FOR PAGE. AND-SWAP-SPACE AND 25600 KILOBYTES FOR SYSTEM-DISK- SPACE ." When the message generator is in relational mode, it is constrained to locating relational links and using relational schemas. There are a variety of each. Currently, relational links are categorized as being either reasons, elaborations, or arithmetic links. During step 2 of the message-generation cycle, the message generator searches first for reason links, next for elaboration links, and finally for arithmetic links. In some cases, the search for arithmetic links may be suppressed. For example, some links whose relation is allocation are subcategorized as being arithmetic operations, as in "75 percent of the total.disk. space requirement was allocated to removable-pack disks". In these cases, expressing the arithmetic relation also would be redundant. When a relational link is located, a corresponding schema is selected. In contrast to the single generic schema, there are a variety of arithmetic and abstract relational ~chemas. Figure 3-5 illustrates the arithmetic "plus" schema that was used to generate the messages for the first paragraph of the "why quantity totaJ-disk-space" relational explanation shown in Figure 1-4. It contains five directives, one to create the new query-terms found in the arithmetic reasoning trace and four to create messages. 
When the message generator is in relational mode, it is constrained to locating relational links and using relational schemas. There are a variety of each. Currently, relational links are categorized as being either reasons, elaborations, or arithmetic links. During step 2 of the message-generation cycle, the message generator searches first for reason links, next for elaboration links, and finally for arithmetic links. In some cases, the search for arithmetic links may be suppressed. For example, some links whose relation is allocation are subcategorized as being arithmetic operations, as in "75 percent of the total-disk-space requirement was allocated to removable-pack disks". In these cases, expressing the arithmetic relation also would be redundant. When a relational link is located, a corresponding schema is selected. In contrast to the single generic schema, there are a variety of arithmetic and abstract relational schemas. Figure 3-5 illustrates the arithmetic "plus" schema that was used to generate the messages for the first paragraph of the "why quantity total-disk-space" relational explanation shown in Figure 1-4. It contains five directives, one to create the new query-terms found in the arithmetic reasoning trace and four to create messages.

The second message creation directive will create as many messages as are needed to account for at least 80 percent of the total value of the fact being explained. (The 80 percent factor was implemented in order to filter out insignificant facts, thus making the explanation more concise. Another process that contributes to more readable explanations is the conversion of all units in different clauses of the explanation to the same highest common denominator, e.g. megabytes.) Following that, two additional messages will be created, one to mention that the remainder of the total is accounted for by other terms, and another to give an example.

(Schema-directives:plus-schema
  (make goal ^goal-name create-extra-query-terms ^status reiterate)
  (make goal ^goal-name create-message ^focus-token <token1>
             ^predicate CAPACITY-REQUIREMENT ^subname RECOMMENDED)
  (make goal ^goal-name create-message ^focus-token new
             ^predicate CAPACITY-REQUIREMENT ^subname GENERAL ^amount 80)
  (make goal ^goal-name create-message ^predicate REMAINDER)
  (make goal ^goal-name create-message ^focus-token new ^predicate EXAMPLE))

Figure 3-5: Sample Arithmetic Schema

Figure 3-6 illustrates the "satisfaction" schema that was used to create the messages for the first sentence of the "why quantity RA60" explanation shown in Figure 1-5. It contains one directive to create an extra query-term matching the token of the new term identified in the "satisfaction" link, and three actions making the three messages which surface as three clauses of text in the explanation.

(Schema-directives:satisfy-schema
  (make goal ^goal-name create-extra-query-term ^focus-token <term2>)
  (make goal ^goal-name create-message ^predicate QUANTITY-SELECTED
             ^term1 <term1>)
  (make goal ^goal-name create-message ^predicate INORDER
             ^rtype relational-prop)
  (make goal ^goal-name create-message ^predicate CAPACITY-REQUIREMENT
             ^subname SATISFY ^term2 <term2>))

Figure 3-6: Sample Satisfaction Schema
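The 80 percent filter implemented by the plus-schema's second message-creation directive can be sketched as follows; the fact names and values are taken from the Figure 1-4 example, but the function itself is an illustration, not XSEL's OPS5 code.

```python
# A sketch of the 80-percent filter: mention the largest contributing
# facts until at least 80 percent of the total is accounted for, then
# summarize the rest with a "remainder" message.

def significant_sources(sources, threshold=0.80):
    """sources: mapping fact-name -> value.  Returns (mentioned, remainder)."""
    total = sum(sources.values())
    mentioned, covered = [], 0.0
    for name, value in sorted(sources.items(),
                              key=lambda kv: kv[1], reverse=True):
        if covered >= threshold * total:
            break
        mentioned.append(name)
        covered += value
    remainder = [n for n in sources if n not in mentioned]
    return mentioned, remainder

srcs = {"total-user-disk-space": 3574,
        "sum-of-system-disk-space": 25,
        "sum-of-page-and-swap-space": 1}
print(significant_sources(srcs))
```

With these values, only total-user-disk-space is mentioned outright, and the other facts fall into the remainder, matching the "THE REMAINDER IS ACCOUNTED FOR BY OTHER FACTORS" clause of Figure 1-4.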
Furthermore, such relations and schemas might facilitate the design of a knowledge acquisition system that would elicit knowledge from an expert, represent it as relations, and generate inference rules from relations. We realize that this could be a very long term goal, but it also has the short term benefit of providing useful explanations.

⁴See [9] for another use of intermediate structures.

Acknowledgements

Many people at CMU and DEC have contributed to the development of XSEL. Some of these include John McDermott, Tianran Wang, and Kim Smith who developed XSEL's sizing and selection knowledge; Robert Schnelbach and Michael Browne who worked on explanation facilities; Sandy Marcus, who wrote XSEL's rule generator; George Wood, Jim Park, and Mike Harmon who provided technical support; and Dan Offutt who is extending XSEL's sizing knowledge with a view towards developing knowledge acquisition facilities.

References

1. R. Davis. Applications of meta level knowledge to the construction, maintenance, and use of large knowledge bases. Ph.D. Th., Stanford University, 1976. Stanford Artificial Intelligence Laboratory Memo 283, Stanford, CA.
2. C. L. Forgy. OPS5 User's Manual. CMU-CS-81-135, Dept. of Computer Science, Carnegie-Mellon University, Pittsburgh, PA 15213, July 1981.
3. Jerry R. Hobbs. Towards an Understanding of Coherence in Discourse. In W. G. Lehnert and M. H. Ringle, Ed., Strategies for Natural Language Processing, Lawrence Erlbaum Associates, New Jersey, 1982, pp. 223-243.
4. Gary Kahn, Steve Nowlan, and John McDermott. A Foundation for Knowledge Acquisition. Proceedings of the IEEE Workshop on Principles of Knowledge-Based Systems, IEEE, Denver, CO, 1984.
5. Gary Kahn and David Geller. MEX: An OPS-based approach to explanation. 1984.
6. Karen Kukich, John McDermott and Tianran Wang. XSEL as Knowledge Acquirer and Explainer. 1985.
7. William C. Mann and Sandra A. Thompson. Relational Propositions in Discourse. 1983.
8. Sandra L. Marcus, John McDermott and Tianran Wang. A Knowledge Acquisition System for VT. Proceedings of the AAAI, AAAI, Los Angeles, CA, 1985.
9. Michael Mauldin. Semantic Rule Based Text Generation. Proceedings of the 10th International Conference on Computational Linguistics, ACL, Stanford University, Stanford, CA, 2-6 July 1984, pp. 376-380.
10. John McDermott. Building Expert Systems. Proceedings of the 1983 NYU Symposium on Artificial Intelligence Applications for Business, New York University, New York City, April 1983.
11. Kathleen Rose McKeown. Generating Natural Language Text in Response to Questions about Database Structure. Ph.D. Th., University of Pennsylvania Computer and Information Science Department, 1982.
12. William R. Swartout. "XPLAIN: a System for Creating and Explaining Expert Consulting Programs". Artificial Intelligence 27 (1983), 285-325.
DESCRIPTION STRATEGIES FOR NAIVE AND EXPERT USERS

Cécile L. Paris
Department of Computer Science
Columbia University
New York, NY 10027

Abstract

It is widely recognized that a question-answering system should be able to tailor its answers to the user. One of the dimensions along which this tailoring can occur is with respect to the level of knowledge of a user about a domain. In particular, responses should be different depending on whether they are addressed to naive or expert users. To understand what those differences should be, we analyzed texts from adult and junior encyclopedias. We found that two different strategies were used in describing complex physical objects to juniors and adults. We show how these strategies have been implemented on a test database.

INTRODUCTION

Whether the purpose of a natural language program is to ease man-machine interactions [Kaplan 82; Hayes and Reddy 79] or to model human communication [Lehnert 78], it must take into consideration certain characteristics of the person engaged in the interaction. In an interaction between people, the goals, beliefs, intentions, knowledge and past experience of the participants will play a role in how they communicate with each other [Cohen and Perrault 79], [Perrault and Allen 80]. Similarly, those characteristics should play a role in the way a computer system interacts with a user. In particular, a question-answering program that provides access to a large amount of data to many different users will be most useful if it can tailor its answers to each user.

We are interested here in how the level of knowledge (or expertise) of the user affects an answer. As an example of this kind of tailoring in a naturally occurring conversation, an explanation of how a car engine works aimed at a child will be different from one aimed at an adult, and an explanation adequate for a music student is probably not quite sufficient for a student in mechanical engineering. In this paper, we study the strategies used in natural language to describe physical objects to two different types of users: naive and expert. By naive and expert, we refer to how familiar a user is with the domain of the database, as opposed to how experienced the user is with the question-answering system. When the database is complex, it becomes important to vary the level and the kind of details included in the answer in order to provide an answer that can be best understood by the user.

We plan to use this distinction in the question-answering program for RESEARCHER, a system being developed at Columbia University. RESEARCHER reads, remembers, and generalizes from patent abstracts written in English [Lebowitz 83]. The abstracts describe complex physical objects in which spatial and functional relations are important. Thus, we are interested in characterizing spatial strategies that can be used for experts and novices about certain physical objects. We give details in the paper of the current implementation of description strategies on a test database of object descriptions.

OUR DOMAIN

Our goal is to characterize some of the strategies employed to describe complex physical objects and see whether these strategies are different for naive and expert users. To investigate this problem, we have looked at texts from encyclopedias (both adult and junior) and high school physics textbooks.¹ The texts we have studied are about physical objects performing a function (such as telephones and telescopes), and generally do not exceed several paragraphs in length.
These texts make the distinction between naive and expert readers and have been widely used for a number of years for those audiences. They provide examples of different descriptive strategies that actually occur in natural language. Thus, a question-answering system should be able to reproduce them.² Studying texts from encyclopedias gives us the advantage of being able to compare descriptions of identical objects aimed at two distinct audiences. On the average, a younger audience has had less opportunity to gather experience and knowledge about any particular domain. Thus a younger audience as a whole is more naive about a domain than an adult audience. Hence, these texts give us a good starting point for studying the differences between the descriptions given to naive users and those given to experts in the domain. To minimize the effects of stylistic differences on our results, we chose texts from several different encyclopedias in each audience category.

    1) The hand-sets introduced in 1947 consist of a receiver and a
    transmitter in a single housing available in black or colored plastic.
    2) The transmitter diaphragm is clamped rigidly at its edges 3) to
    improve the high frequency response. 4) The diaphragm is coupled to a
    doubly resonant system 5) -- a cavity and an air chamber -- 6) which
    broadens the response. 7) The carbon chamber contains carbon granules,
    8) the contact resistance of which is varied by the diaphragm's
    vibration. 9) The receiver includes a ring-shaped magnet system around
    a coil and a ring-shaped armature of Vanadium Permendur. 10) Current
    in the coil makes the armature vibrate in the air gap. 11) An attached
    phenolic-impregnated fabric diaphragm, shaped like a dome, 12) vibrates
    and sets the air in the canal of the ear in motion.

    1. Constituency
       Depth-attributive for the transmitter
       Depth-attributive for the receiver

    (Description of the transmitter)    (Description of the receiver)
    2. Depth-Attributive                 9. Depth-Attributive
    3. Cause-effect                     10. Cause-effect
    4. Depth-Attributive                11. Attributive
    5. Depth-identification             12. Cause-effect
    6. Cause-effect
    7. Depth-Attributive
    8. Cause-effect

    Figure 1: Constituency Schema Example

THE TEXTUAL ANALYSIS

We began by analyzing the different texts using methods developed by other researchers ([Hobbs 78a], [Hobbs 80], [Mann 84], [McKeown 82]): we decomposed paragraphs in terms of their primitive rhetorical structure in an attempt to find a consistent structure in each group of texts. The analysis showed the adult encyclopedia descriptions to be mainly in terms of the sub-parts of the object being described. These texts can be characterized by one of the textual structures posited in [McKeown 82], the constituency schema. This structure is presented in the next section. On the other hand, no schema or other organizing structure consistently accounted for the descriptions in the junior encyclopedia texts. In looking for other types of organizing strategies, we discovered that the main strategy in describing objects to a naive user is to trace through the process that allows the object to perform its function.

¹We studied about fifteen examples from each encyclopedia and textbook.
²Our goal, however, is not to study how effective these texts are for different human readers. If further psychological research shows that other distinctions are appropriate, they could be incorporated. The distinction based on encyclopedias and textbooks is really the only one available at this point.
Strategy for the Adults

The descriptions from the adult encyclopedias tend to follow the pattern established by the constituency schema, one of the textual structures defined in [McKeown 82]. In her work on natural language generation, McKeown studied the problems of what to say and how to organize text coherently. She examined texts and transcripts, classifying each sentence as one of a set of rhetorical predicates³ and found that some combinations of predicates were more likely to occur than others. Moreover, for each discourse situation, some combination would be the most appropriate one. Those standard combinations were encoded as schemas which are associated with a particular discourse situation. One of these schemas is the constituency schema, which is used to describe an object (or concept) in terms of its subparts and their properties. The constituency schema is shown below.⁴ (For a given entity, Constituency is the description of its sub-parts or sub-types, and the attributive predicate gives properties associated with it.)

    Constituency Schema

    {Constituency}
    Cause-effect / Attributive*
    {Depth-identification / Depth-attributive
      {Particular illustration / Evidence}
      {Comparison / Analogy}}+
    {Amplification / Explanation / Attributive / Analogy}

Consider for example the description of a telephone from an adult encyclopedia [Collier 62] shown in Figure 1.⁵ In the first sentence, the telephone is described in terms of its constituency (or sub-parts): the transmitter, the receiver and the housing. From sentence 2 to 8, attributive information (or properties) as well as functional information (cause-effect) about the transmitter are given.⁶ Finally, the receiver in turn is described from sentence 9 to 12, using both attributive and cause-effect information.

Entries in the junior encyclopedia and high school text books

In texts aimed toward younger audiences, an object is mainly described in terms of the functions of its parts. The description traces through the process information instead of an enumeration of its sub-parts, as is usually the case in the adult descriptions. The parts are mentioned only when they need to be, that is, when the description of the mechanical process calls for them. As an example of this phenomenon, consider the description of a telephone shown in Figure 2, taken this time from the junior encyclopedia [Britannica-Junior 63]:⁷

    I. 1) When one speaks into the transmitter of a modern telephone,
    these sound waves strike against an aluminium disk or diaphragm and
    cause it to vibrate back and forth in just the same way the molecules
    of air are vibrating.
    II. 2) The center of this diaphragm is connected with the carbon
    button originally invented by Thomas A. Edison. 3) This is a little
    brass box filled with granules of carbon composed of especially
    selected and treated coal. 4) The front and back of the button are
    insulated.
    III. 5) The talking current is passed through this box so that the
    electricity must find its way from granule to granule inside the box.
    6) When the diaphragm moves inward under the pressure from the sound
    waves the carbon grains are pushed together and the electricity finds
    an easier path. 7) Thus a strong current flows through the line.
    8) When a thin portion of the sound wave comes along, the diaphragm
    springs back, allowing the carbon particles to be more loosely packed,
    and consequently less current can find its way through. 9) So a
    varying or undulating current is sent over the line whose vibrations
    exactly correspond to the vibrations caused by the speaker's voice.
    10) This current then flows through the line to the coils of an
    electromagnet in the receiver.
    IV. 11) Very near to the poles of this magnet is a thin iron disc.
    V. 12) When the current becomes stronger it pulls the disc toward it.
    13) As a weaker current flows through the magnet, it is not strong
    enough to attract the disk and it springs back. 14) Thus the diaphragm
    in the receiver is made to vibrate in and out ....

    Figure 2: Text from a junior encyclopedia

We see that the theme of this text is the mechanical process description shown in bold face. That process description gets interrupted when descriptive information can be included concerning a sub-part that was just mentioned as part of the process description. Such information is shown in indented italics in the example. Furthermore, we see that, in the junior encyclopedia, not only is the description made mainly through a process trace, but there are no large gaps in the chain of references. Almost everything is spelled out.

³Rhetorical predicates characterize the structural purpose of a sentence and have been discussed by a variety of linguists [Grimes 75], [Hobbs 78b]. Some examples are constituency (description of sub-parts or sub-types), attributive (providing detail about an entity or event) and analogy (the making of an analogy).
⁴We have altered McKeown's constituency schema slightly by making the first predicate optional instead of mandatory: in the texts studied, the main parts of the object were not necessarily immediately listed. We are using McKeown's notation: "{}" indicates optionality, "/" indicates alternatives, "+" indicates that the item may appear 1-n times, and "*" indicates that the item may appear 0-n times. Finally, ";" is used to represent classification of ambiguous propositions.
⁵For clarity, the original one-paragraph text has been divided into three paragraphs.
⁶The reader who is familiar with this type of analysis will note that several properties of the transmitter are in turn identified and described using attributive information, which is a form of schema recursion.
⁷The original entry contained the two paragraphs. The second one has been divided for clarity.
Flgute 2: Text from ,~ junior encyelopedi,~ Constituency Schema {Constituency} Cause-effect ] Attributlve* { Depth-ldentlficatlon ] Depth-attrlbutlve { P~rtlcular lllustratlon / Evldence} { Comparison , Analogy} }+ { Amplificatlon / Explanation / Attributive [ Analogy } Consider for example the descnptlon of a telephone from an, adult encyclopedia [Colher 62] shown in Fzgure 1 ~. In the first sentence, the telephone is described In terms of its constltuency (or sub-p~xts}: the transmitter, the receiver and the housing From s~ntence 2 to 8, attributive reformation (or properties) ~s well as functlonM Info~matlon (cause- effect) about the transmltter axe glven ~ Finally, the recelver ~n turn ~s described from sentence 9 to 12, uslng both attributive and c~use-efrect information. SFor clarity, the original one paragraph text has been divided mto three paragraphs. SThe reader who is familiar with this type of ~nalysm will note that several properties bf the transmitter are in turn identified and described uslng attributive reformation which is a form of schema regnrs|on, Entries in the junior encyelopedla ~nd hlgh school text books In texts aimed toward younger audiences, an object is m~nly described in terms of the functions of its parts. The description traces through the process reformation instead of an enumeration of its sub-parts, • s is usuMly the case in the adult descriptions. The p~rts are mentloned only when they" need to be, that is, when the descnption of the mechanical process calls for them. As an example of this phenomenon, consider the description of a telephone show.n tn Figure 2, taken thls tIJne from the encyclopedia lunior [Bntanmc~-Junior 6,3]' : We see that the theme of this text is the mechanlcM process description shown in bold face. That process descnptlon gets interrupted when descnptlve informatlon can be included concerning sub-paxt that was just mentioned as part of the process descnption. Such information Is shown zn indented it~lics in the example. Furthermore, we see that, in the junior encyclopedla, not only ts the description made mainly through a process trace, but there are no large gaps in 7the original entrv contalned the two paragraphs. The second one has been dlvlded for clarity 240 ; Description of the TELEFHOKE btaed on the Constituency uchens. .u~wY axe the unique identifiers fur the object frtae8. i The Constituency gchen& vu filled by eteppxng throuKh in ATH. tasvor : ~ (TE.EPHO~) I elDENTIFICATIONe (VARIART-OF: DEVICE#)) • CONSTITU~CT, (/~i]2 (l'RA~S311t~..~)) (~w~t6 (HOUSIMG)) (~mrutS (LINE)) (Jem:'~t7 (RECF.IVER))) The telephone is • device. It consists of t traflenittUro • houaing. • line tad n ruceiTer. (7R.~I~) ; The tranntttur t8 I ,IDE~TIFICATION* (VABIAKr-0F: "fIUtl~MITl'~8)) ; • kind of traamLttter. 8COMSTITUENCTe (~8 (DOU~LT-RESONA~'r-S'fS'r~)): It h "~ • doubly (J~13 (DIAPHRWm-T))) i /dlnil6 ~HOUSING) t~e housing is (e[D~rrlFICATIONe (VARIAFI*-0F: COVERS)) ; • type of cover: (,@NSTITUENCY*) fdf~f5 (LINE) ; the line is • rite; I eIDEFI'IFICATIONe (VkI~IAFr-0F: lll~#)) *CONSTITUE~CT=) • u~r~17 (RECEIVe3) *IDENTIFICATION8 (VARIAFr-OF: RECEIVIng)) • CONSTII"UENCT* (~ME]i22 (DIAPHRAGM-T)) (&~21 (AIR-GAP)) (~v~18 (F.LEC~OMAG~:'r))) The receiver te kind of receiver. It ¢ou8i8~8 of • dl&phr~pt. ~ sir ~tp -~d ,~ electronwrnet. Figure 3s Printout of the Constituency Schem~ Example the chain of references Almost everyttung is spelled out. 
Consider the third paragraph of the text given above, where every step is explained:

"The talking current is passed through this box SO THAT the electricity must find its way FROM GRANULE TO GRANULE inside the box."

From there, the writer goes on to explain how the electricity passes through the carbon box, once again stepping through the process, spelling out the consequences of each step:

"When the diaphragm moves inward under the pressure from the sound waves the carbon grains are pushed together and the electricity finds an easier path. THUS a strong current flows through the line."

Contrast this detailed process description with the description given for an adult:⁸ "The carbon chamber contains carbon granules, the contact resistance of which is varied by the diaphragm's vibration".

Other differences occurred between the junior and adult entries as well. In general, more visual information was included in the text for the junior, so as to render the description more vivid. For example, the carbon button in the telephone description is described as "a little brass box filled with carbon of especially selected and treated coal" in the junior encyclopedia, in contrast to "the carbon chamber contains granules" in the adult encyclopedia. Similarly, the junior entry for light bulbs describes a filament as a "fine tungsten filament wound in very small coils", whereas the adult encyclopedia mentions only "a coiled tungsten filament."

Another major difference was that the junior encyclopedia texts had a higher degree of redundancy while the adult encyclopedia ones were quite concise. We refer to the junior telephone example again to illustrate this point: sentences 5 and 6 explained how the electricity is made to flow easily through the box. Sentence 7 is a recapitulation of that phenomenon. Finally, sentence 8 explains the reverse effect.

Finally, we observed that expository style and vocabulary differed considerably in the two types of texts studied. Future research will attempt to characterize these phenomena.

COMPUTATIONAL USE OF THE STRATEGIES

The strategies are currently implemented on a test database composed of object descriptions from the encyclopedias. The representation of an object thus contains all the information included for that particular object in both encyclopedias.

⁸This excerpt is taken from an adult encyclopedia.
Flfure 4: Printout of the Process Tr~ce strategies presented dlctate what informatmn to snclude from the knowledge base, based on the constituency schema _or the process trace ~ shown in Figures 3, 4 and SY Knowledge-based rep~seutat|on We use a frame-based knowledge representatmn - [Wasserman and Lebowitz 83; Wasserman 85} m which the basic frame represents an oblect These structures are the entitles in a generalizatmn hierarchy In additmn to the generalization, or instance-of links, there exist two additional kinds of links ioming entlties: part-of links, which indicate an entlty is a part of a larger structure, and relations, whlch convey mformatmn about spatlal or functional reiattonshlps Finally, there ,~re causal links between relations called meta.relations. 9Further work is needed to fully implement the schema predicates and add more descnptlve mlormatlon Implementlon of the adult encyclopedia strategy For an adult, the program {~ls the constituency schema, ~ shown In Figure 3An_ The predicates contained m the schema define the type of mformatmn to be taken from the database. The figure shows the final output. The entities are represented by thelr unique identlfier &MENLX:, and the predicates are the starred items (e.g. *IDENTIFICATION'). The hypothetical english output is included in the comments. The identification predicate represents the more general concept of which the present ob|ect ts ~n mstance Because the test database mcludes only the mformatmn contained In the texts read, the hierarchy may not be complete for all objects. As ~n example, a transmitter was never defmed m terms of a more general device, and thus has no super-ordm~te The constituency predicate gives the components of ~a entity, if there are any lOSes [McKeown 801 for details of ~ stmdar system. 242 " i nov the pro~rta taken each relation which can be dirlded into subnteps and ~racen ~hrough that each step. An this case, aBF.LS (P-VIBRATES) can be broken up into aBELS (P-MOVESoFOR|ARD) and aBEL7 (P-MOVES-BACAIARD). aBEL18 (P-INCREASES}: ; The increased sound yarns subject ; : intensity object (/~I~128) [$OU~DIIAVE- I FINNS l I'T] m> ~U~3 (M-CAUSES} ; cannes aBEL8 (P-MOV'r.3-FORUARD): ; the diLphrq;m subject : ; to Boys forv~rd objeo~ ( ~ ) [DIAFHRXQ/-T] aBEL8 (P-IfOVES-FORUARD) : subiect : objec~ (aBe3) [DI~OtAGX-T] m> 4dm4 {M'~US~5) ; vhtCh causes aBE~28 (P..COMPRF.SSF.S) : : the ~rl~lule8 £a the carbon chamber to be conpreased. subject : object (J*Vl~l 2) [GRANULE] aBEL2S (P-C~MFR£SST~) : subject : object (~i]~112) [GRMIUt~] m> fd~S {M-CAU~£S) ; A8 a rasult. 8REL22 (P-OECREASES): ; their contact resis~snce subject : : decreases. object (*re:lit3) [CONTACT-RESI ST~CE] aB[1,22 (P-OEC~EASF.3) : subject : objec~ (,I~E]i13) [COFFACT - RF..S l S'L4JIC~ =~> #J~8 (]I-CXUS'r~} ; -,,d cannen aBEL24 (P-INCRF./i3~): ; the curren~ to increase. subject : object (/rMEM31) [CU?~I~I'- I h'l'l~S l TI' ] i The prosrus trace8 throug~ in the same manner for each relation ~avin~ substeps. FIKu~e $, Printou~ of the Process Trace (cont'd) Junior encyclopedia strategy For the junior, the strategy dictates to fol!ow the cause-effects links in the knowledge b~se ,n order t,o trace the process. In our representatlon, th~se causual links are named meta-relauons (In the figure, they are represented by the Identlflers &:MRX. &RELX correspond to the reiauons, l e the spatlal or funcUonal l,nks between entltles ). The program traces through the meta-relatlons, ptcklnK the process informatlon as shown m Fisure 4. 
When .~ relatlon can be broken into substeps, the program then traces through those sub-steps (see Figure S). Future Work There axe severM theoreticM msues that need to be addressed. In our test dat~ba.se, the problem of declding m what order relations occur does not arise. However, for an arbitrary database, knowmg where to begs describing a process may be more difficult Simllaxly, the process may not be as sequential ~s the ones we examined so fax, and, as a result, we plan on further study of how to organize the informaUon. Furthermore, in our test database, we don't need tc conszder how deep into the substeps the process description should go, but this Issue exists for an arbitraxy database. Finally, we have looked at the two ends of a spectrum (n~ve and expert), but, for users not at either of these ends, we must consider how to combine these strategies. 243 We have started to address the problem of generating natural language for the descriptions. We have begun the augmentation of an English surface generator ]McKeown 82] that, using • functional grammar [Kay 79], takes the output of the textual component to translate it into English sentences "'. However, how this program may interface with the strategies remains to be studied. CONCLUSION It is important to tailor a system's response to the level of expertme of the user. By studying texts aimed at two different levels of readers, we have found that two different strategies were used in describing physical objects, depending on whether the description was for an adult or for a junior. For an adult, an object is described with its sub-parts and their properties; for a junior, the description traces through the mechanical process which renders the object functional. The two strategies presented account for the mare differences found between the adult and jumor entries. This turns out not to consist of merely glving more details for the expert ~ m often thought [Wallis and Shortliffe 82]. [n the adult entries, details given are mainly about the sub-parts and thelr properties and less about the mechanical process involved. When the process mechanism is mentioned at all, it is done very briefly. In the iumor entries, process mechanism m more important than sub-parts and given in more detail. Parts are introduced either alter or at the same time as their function is defined, and, as a consequence, are always defined when presented. Furthermore, since the process mechanism follows every step of the causal chain, descriptions for the novice tend to include more detail about functional reformation than descriptions for the expert. We have shown how formalization of the strategies allows for the development of question-answering systems which can tailor their responses to the user, given his level of expertise about the domaml2 ACKNOWLEDGMENTS We would like to thank Kathy McKeown and Michael Lebowitz for helping in both the research and the writing of this paper. This research was supported in part by the Defense Advanced Research Projects Agency under contract N00039-84-C-0165. llDetermmmg the level of expertise of the user is another research problem which we have been studying ( [Paris 84]) 12Determtmng the level of expertise of the user is another researc~i problem which we have been studying ( [Paris 84]). References [Britannica~ Junior 63] Britannica Junior Encyclopedia Encyclopedia Britannica [ncorparatmn 1963; Wiliam Benton Publisher [Cohen and Perrault 79] Cohen, P. R. and Perrault, C R. Elements of a Plan-B~ed Theory of Speech Acts. 
Cognitive Science 3:177 - 212, 1979. [Collier 62] Collier's Encyclopedia. The Crowell-Collier Publishing Company 1962; William Halsey editorial director. [Grimes 75] Grimes, J E. The Thread of Discourse. Mouton, The Hague, 1975 [Hayes and Reddy 791 Hayes, P. and Reddy, R. Graceful Interaction m Man-Machine Communicatlon. In Proceedings o/ the IJCAI. lnternatlonal Joint Conferences on Artificial Intelligence, 1979. [Hobbs 78a] Hobbs, J. W~y i8 a Discourse Coherent'/. Techmcal Report 176, SRI International, 1978. [Hobbs 78b] Hobbs, J. Coherence and Coreference. Technical Note 168, SRI International, 1978 Menlo Park, California. [Hobbs 80] Hobbs, J. and Evans, D Conversatlon as Planned Behavior Cogniti1:e Science 4(4)349 - 377, 1980 [Kaplan 82] Kaplan, S. J. Cooperative Responses from a Portable Natural Language Query System. Artificial Intelligence 2(19)/ 1982. [Kay 791 Kay, Martin. Functional Grammar In Proceedings of the 5ih meeting of the Berkeley Lin~istics Society. Berkeley Linguistlcs Society, 1979. 244 [Lebowitz 83] [Lehnert 78] [Mann 84] Lebowt~z, M RESEARCHER: An Overview In Proceedings of the Third National Conference on Artificial Intelligence. American Association of Artificial Intelligence, Washington, DC, 1983. Lehnert, W. G. The Process of Question Answering. Lawrence Erlbaum Associates, Hillsdale, N. J., 1978. Mann, WC. Discourse Structure for Tezt Generation. Technical Report ISI/RR-84-1~', Information Sciences Instltute, February, 1984. 4676 Admlralty Way/ Marma del Rey/Cslifornia 90292-6695. [McKeown 82] McKeown, K. Generating Natural Language Tezt in Response to Questions About Database Structure. PhD thesls, University of Pennsylvania May, 1982. Also a Technical report, No MS- CIS-82-05, University of Pennsylvania, 1982. [Pans 84J Parts, C. L. Determtnmg the Level of Expertise. In Proceedings of the First Annual Workshop on Theoretical Issues in Conceptual Information Processing Atlant.% Georgia, 1984 [Perrault ~nd Alien 80] Perrault R. C -~nd Allen J F A Plan-Based Analysts of Indirect Speech Acts. American Journal of Computational Linguistics 6(3-4), 1980. IV/slim and Shortliffe 82] Wallis, J.W. and Shortliffe, EH. Ezplanatory Power for Medical Ezpert Systems: Studies in the Representation of Causal Relationships for Clinical Consultation. Technical Report STAN-CS-82-923, Stanford University, 1982. Heurmtics programming Project. Department of Medecine and Computer Science. [W~sserman 85] Wassermsn, K. Unifying Representation and Generalization: Understanding Hierarchically Structured Objects PhD thesis, Columbia University Department of Computer Science, 1985. [Wasserman and Lebowitz 83] Wasserman, K. and Lebowltz, M. Representing Complex Phystcsl Objects. Cofnition and Brain 77:eory 6(3)3,33 - 352, 1983. _ 245
Tense, Aspect and the Cognitive Representation of Time

Kenneth Man-kam Yip
Artificial Intelligence Laboratory
Massachusetts Institute of Technology
545 Technology Square
Cambridge, MA 02139

ABSTRACT

This paper explores the relationships between a computational theory of temporal representation (as developed by James Allen) and a formal linguistic theory of tense (as developed by Norbert Hornstein) and aspect. It aims to provide explicit answers to four fundamental questions: (1) what is the computational justification for the primitives of a linguistic theory; (2) what is the computational explanation of the formal grammatical constraints; (3) what are the processing constraints imposed on the learnability and markedness of these theoretical constructs; and (4) what are the constraints that a linguistic theory imposes on representations. We show that one can effectively exploit the interface between the language faculty and the cognitive faculties by using linguistic constraints to determine restrictions on the cognitive representations and vice versa.

Three main results are obtained: (1) We derive an explanation of an observed grammatical constraint on tense -- the Linear Order Constraint -- from the information monotonicity property of the constraint propagation algorithm of Allen's temporal system; (2) We formulate a principle of markedness for the basic tense structures based on the computational efficiency of the temporal representations; and (3) We show Allen's interval-based temporal system is not arbitrary, but can be used to explain independently motivated linguistic constraints on tense and aspect interpretations.

We also claim that the methodology of research developed in this study -- "cross-level" investigation of independently motivated formal grammatical theory and computational models -- is a powerful paradigm with which to attack representational problems in basic cognitive domains, e.g., space, time, causality, etc.

1. Objectives and Main Results

One major effort in modern linguistics is to limit the class of possible grammars to those that are psychologically real. A grammar is psychologically real if it is (a) realizable -- possessing a computational model that can reproduce certain psychological resource complexity measures, and (b) learnable -- capable of being acquired (at least, in principle) despite the poor quality of input linguistic data. A shift of emphasis from the pure characterization problem of grammar to the realization and learnability problems naturally brings linguistics closer to AI work in natural language understanding concerned with computational models of language use and language acquisition. Computational study is in principle complementary to more formal and abstract grammatical theory. Each should contribute to the other. The purpose of this paper is to work out an example of how formal grammatical theory and computational models can effectively constrain each other's representations. In particular, I seek to explore four fundamental issues:

1. How is the choice of primitive structures in grammatical theory to be justified?
2. What is the explanation of the rules and constraints that have to be stipulated at the grammatical level?
3. How are these knowledge structures acquired?
4. What are the theoretical constraints imposed by the grammar on the representational scheme of the computational theory?

What I hope to show is that structures and principles that have to be stipulated at the grammatical level fall out naturally as consequences of the properties of the algorithms and representations of the underlying computational model. In so doing, I will also restrict the class of plausible computational models to those that can explain or incorporate the constraints imposed by the formal grammatical theory.

There are a number of requirements that must be met in order for such "cross-level" study to succeed. First, there is a sizable collection of facts and data from the target domain to be explained. Second, there is independent motivation for the theory of grammar -- it is empirically adequate. And, third, the computational model is also independently motivated by being sufficiently expressive and computationally efficient. With these considerations, I have chosen two domains: (1) tense and (2) aspect. Tense concerns the chronological ordering of situations with respect to some reference moment, usually the moment of speech. Aspect is the study of situation types and perspectives from which a particular situation can be viewed or evaluated (cf. Comrie76).

The point of departure of this study is two papers: (1) for the theory of tense, Hornstein's "Towards a Theory of Tense" (Hornstein77) and (2) for the cognitive theory of time, James Allen's "Towards a General Theory of Action and Time" (Allen84). In the following, I shall list the main results of this study:

1. A better theory of tense with revised primitive tense structures and constraints.
2. We derive an explanation of Hornstein's Linear Order Constraint, an observed formal constraint on linguistic tense, from properties of the constraint propagation algorithm of Allen's temporal system. This shows this formal grammatical constraint need not be learned at all. We also show that the rule of R-permanence follows from the hypothesis that only the matrix clause and the subcategorizable SCOMP or VCOMP can introduce distinct S and R points. Finally, we prove that a certain boundedness condition on the flow of information of a processing system leads directly to the locality property of a constraint on sequences of tense.
3. A principle of markedness for the basic tense structures based on the computational efficiency of the temporal representation. The principle predicts that (1) of the six basic tenses in English, future perfect is the only marked tense, and (2) the notion of a distant future tense, just like the simple future, is also unmarked.
4. A better account of the state/event/process distinction based on Allen's interval-based temporal logic and the idea that the progressive aspect specifies the perspective from which the truth of a situation is evaluated.
5. An account of theoretical constraints on the representation of time at the computational level, e.g., three distinct time points are necessary to characterize an elementary tensed sentence, and the distinction between instantaneous and non-instantaneous time intervals.

2. Tense

We begin by first outlining Hornstein's theory of tense. In section 2.1, we describe the primitives and constraints on tense of his theory. In sections 2.2 and 2.3, we show how the primitives and constraints can be derived from computational considerations.

2.1 Revisions to Hornstein's Theory of Tense

Hornstein develops a theory of tense within the Reichenbachian framework, which postulates three theoretical entities: S (the moment of speech), R (a reference point), and E (the moment of event).
What I hope tO snow is that structures and prmcJoles that have to be sttoulatgG '~t the grammatical level fall out nalurally as consequences of the proDert=es of the algorithms and representations of the underlying comoutahonal model. In sO doing, I will also restnct the class of plausmle computational models tO those that can exclam or incorporate the constraints =m;3osed by the formal grammatical theory. There are a numoer of requirements that must be met m order for such "cross.lever' study to succeed. First, there is a sizable collection of fzcts and data from the target domain to be explained. Second. there =s ,ndeDendent motwauon for the theory of grammar .. =t ~s empmca:ly adequate. And, third, the computational model =s also ,nrJeoendently motivated by ioemg sufhc=ently express=re and computatlonally efficient. With these considerations, I have chosen two domains: (1) tense and (2) aspect. Tense concerns the Chronological ordering Of situations with resnect tO some reference moment, usually the moment of s!3eech. Aspect =S the study of situation types and perspectives from which a particular situation can be viewed or evaluated (cf. Comrie75) The point of departure of this study is two papers: (1) for tl~e theory of tense, Hornstetn's "Towards a theory of Tense" (Homstem77) and (2) tor the cognitive theory of time. James Allen's "°Towarcls a General Theory ot Action and 18 Time" (Allen84). In the following, I shall list the main results of this study: 1. 2. A better theory of tense with revised primitive tense structures and constraints. We derive an exDlanatmn of Hornstein's Linear Order Constraint, an oioserved formal constraint on lingu=stic tense, from propert=es of the constraint propagat=on algorithm of Allen's temporal system. This shows this formal grammatical constraint need not be learned at =1. We also show that the rule of R.germanence follows from the hypothes=s that only the matrix clause anti tl~e suocategortzaDle SCOMP or VCOMP can introduce distract S and R points. Finally, we prove that certain boundedness condition on the flow of mformatmon Of a grocassmg system leads d=rectly to the locality properly of a constraint on secluences of tense. 3. A prmczole of markedness for tense structures based on the comoutat=onal efficiency of the temporal representation. The prmciple pred,cts that (1) of the stx basic tenses m Enghsh, future perfect =s the only marked tense, and (2) the not=on of a dastant future tense, lust like the s=mple future. =s alSO unmarked. A better account of the state/event/process d=st=nct=on based on Allen's interval-based temporal Iogac and the =dea that the progress=ve aspect sl~ec,hes the perspect*ve from wh=ch the truth of a s~tuation is evaluated. An account of theoretical constraints on the representation of hme at the comDutat=onal level, e.g., three distract t=me points are necessary to charactenze an elementary tensed sentence, and the d~stmctmn between instantaneous and non-instantaneous t=me intervals. 2. Tense We begin Dy hrst outhmng Hornstem's theory of tense. In sect=on 2.1. we describe the 13rtmtt,ves and constramnts on tense of h~s theory. In sectzons 2.2 and 2.3. we snow how the 0nmit=ves and constraints can be denved from computat=onal conszderat=ons. 2.1 Revcs,ons to Hornstem's Theory of Tense Hornstem develops a theory of tense w#th#n the Re~cnenbachlan framewcrk whtch postulates three- theoretical entit~es: S (the moment of speech}, R (a relerence point}, and E (the moment of event). 
The key ~dea =s that certain linear orOenngs of the three t~me I:}o=nts get grammat=cahz.,~l mid the smx bas=c tenses oi Engl,sh. 1 The following ~s the last of basic tense strOctures: 1. SIMPLE PAST E,R_S 2. PAST PERFECT E_.R_S 3. SIMPLE PRESENT S,R,E 4. PRESENT PERFECT E_S.R 5. SIMPLE FUTURE S_R,E 6. FUTURE PERFECT S_E~R The notation here demands some explanation. The underscore symbol "~" is interpreted as the "less-than" relation among time points whereas the comma symbol .... stands for the "teas-than-or-eQual-to" relatmn. As an illustration, the present perfect tense denotes a situation in winch the moment of speech is either cotemporaneous or precedes the reference point, while the moment of event =s strictly before the other two moments. Note that Hornstem also uses the term "assoc=ation" to refer to the comma symbol ",". Geven the bas=c tense structure for a s=mole tensed sentence, the mterpretat=on of the sentence that arises from the interact=on of tense and time adverbs ~s represented by the modihcatmn of the posit=on of the R or E points to form a new tense structure wh=Ch we call a aermeO lense structu,e. In two papers (Hornstem77 & Hornstem81), Hornstem proposes three formal constraints that hmlt the class of derived tense structures that can be generated from the bas=c tense structures m SuCh a way as to capture the acceptabd=ty of sentences containing temporal adverbs (e.g.. now, yesterday, tomorrow), temporal connechves (e.g., when. before, after), and md=rect speech. In the rest of tins sect=on, I shall examine the adeouacy of these constraints. 2.1.1 Linear Order Constraint The Linear Order Constraint (LOC) states that t!~.523-4): (1) The linear order of a clenved tense structure must be the same as the hnear order of the basic structure. (2) NO new assoc=at=on ~s ;roduced =n the clerfved tense structure. LOG IS st=oulated to account for examoles cons=st=ng Of a single temporal adverb such as (4a) and those w~th two hme adverbs such as ~'32). 2 4a. Jonn came home i. "now, at this very moment i. yesterOay iii. "tomorrow 32 a. Jonn left a week ago [from] yesterclay. h. [From] Yestertlay, Jonn left a week ago. c. °A week ago. Jonn left [from] yesterday. The basic tense structure for 4(ai) is: E,RoS (sim[ole past: Jonn came t~ome) NOw modifies E or R so that they become cotemporaneous with ll~e moment of speech S with the clerived tense structure as 1. Hornstem actua=ly ksNid tone ~a~l¢ ter~ Put I *.,gmk U~e Dn~otes3~ve Oo~onQs to tfle Dromnce of asoect fqltrtet flqn te~. 2. The ,num~nnOs are Homstlm~'s. 19 follows: E,R,S (BAD: violates LOC since new association is produced) On the other hand, 4(aii) is acceptable because the modifier yeslerOay leaves the tense structure unchanged: yesterday E,RIS -- E,RIS (OK: does not violate LOC) The crucial example, however, ms 5(c): 3 5c. John has come home i. ?right now i i . "tomorrow iii. yesterday. LOC predicts (wrongly) that 5cii is good and 5ciii bad. 4 But LOC gives the wrong prediction only on the assumotmon that the basic tense structures are correct. To account for 5c. i propose to save the LOC and change the following SRE assocmatmon with the present perfect: PRESENT PERFECT E_R.S With the modified basic tense structure for present perfect. LOC will give the correct analysmS. 5cii =s bad because: romp r row E__R.S -- EIS~R (linear order violated) 5ciii is acceptable since: yesterday E__R.S -- EIR__S (OK: no new linear order and no new comma.) 
The questmon that naturally arises at this point ms: Why does Hornstein not choose my prooosed SRE structure for the present perfect? The answer, I befieve, will become apparent when we examine Hornste,n's Second constra, nt, 2.1.2 Rule for Temporal Connectives The rule for temporal connectives (RTC) states that (p.539-40): For a sentence of the form Pl.conn-P 2 where "conn" ~s a temporal connectmve such as "when" "before", "after" etc.. line up the S pomt~ of Pt and F 2, that IS. wnte the tense structure of Pl and P2' lining uP the S points. Move R 2 to under R 1, placing E 2 accorc=ngiy to preserve LOC on the bes=c tense structure. It can be easily seen that my proposed tense structure for present 3. See- toot;tote 7 ~ 11 Of Morn~Itein'$ ~IO~'. 4 There rely Oe clouOts ~ re0a~s II~ ac=~ta~ilily of 5dii. An ~ui¥1m~ t~ ot 5¢iii ~ a¢clmtal~ ,~ Dan~ (JeSl~lrJI4ll~. D.271]. A~IO. in French, IRe I ~'e~t moment (Comne76, D.al). perfect does not work with RTC since it produces the wrong predictions for the following two sentences: [1 ] "John came when we have arrived. [2] John comes when we have arrived. For [1] the new analysis is: E.R~S --- E,R~S I I E~R. S EIR~S which does not violate the RTC and hence predicts (wrongly) that [1 ] =s acceptable. Similarly, for [2], the new analys,s is: S.R,E -- S.R.E . (violates RTC) I I E~R. S EIS, R which prediCtS (wrongly) that [2] is bad. This may explain why Hornstem decides to use E_S,R for the present perfect because =t can account for {1 } and {2] with no difficulty. However. I suggest that the correct move snould be to abandon RTC which has an asymmetrical property, I.e., it matters whether Pl or P2 =s put on top, and does not nave an obwous semanttc explanatmon. (See Hornstetn's footnote 20, p.54,.3). My second proooTw31 is then to replace RTC with a Rule of R.permanence (RP) stating that: (RP): Both the S and R points of Pl and P2 must be ahgned without any mamp-latmn of the tense structure for P2" Thus sentence [3l: {3] .John came when we had arrivecl. ~s acceptable because its tense structure does not v=otate RP: E.R__S (OK: S and R points are EIRI$ already aligned) NOW, ~et us reconsider sentences [1] and [2]. Sentence [1] is not acceptable uncler RP and the new tense structure for present perfect since: E.R._S (violates RP: r.ne two R's EIR.S are not aligned) Sentence [2] ,s still a problem. Here I snail maKe my third proposal, namely, that tne simple present admits Iwo Ioas~c tense structures: SIMPLE PRESENT S.R.EandE.R,S Given this modification, sentence [2] will now be acceptable since: E.R,S (S and R points are aligned) E~R. S 20 To examinethe adeouacy of RP. letuslook at more examples: [4] John has come when i. "we arrived if. "we had arrived iii. we arrive iv, we have arrived v. "we will arrive The corresponding analysisisasfollows: [4'] i. E__R.S (BAD) E. RmS if. E__R.S (BAD) E__R__S iii. E__R.S (OK) E.R.S iv. E~R.S (OK) EoR, S v. E~R,S (BAD) S~R.E We can see that the proposed theory correctly predicts all ol the five cases. There ts. however, an apparent counter.example to RP which, unlike RTC, is symmetncal, Le., it does not ma~ter which Of the Pi's =s put on the top. Cons=der the following two sentences: [5] i. John will come when we arrive. if. "John arrives when we wi11 come. RP predicts both 5i and 5if will be unacceptable, but 5i seems to be good. It ts examples like 5i and 5if, I believe, that lead Hornstem to propose the asymmetrical rule RTC. 
But I think the data are m~slead=ng because =t seems to be an ,diosyncrasy of Enghsh grammar that 5i =s acceptable. In French, we have to say an ecluwatent of "John will come when we wdl arrive" with the temporal adverb=al expl=c~tly marked with the future tense (Jespersen6~, p.264). Thus. the acceptability of sentences like 5i can be explained Oy a !ormc=ple of Economy of Speech allowing us to om=t the future tense of the temporal adverbial if the matrix clause is already marked w~th the tuture tense. 2.1.3 Sequences of Tense Now, we clescribe the third and final grammatical constraint on sequences of tense. Consider the following sentences: [6] John said a week ago that Mary (a) will leave in 3 days. {b) would In the (a) sentence, the temporal interpretatmn of the embedded sentence is evaluated w=th respect to the moment of speech. Thus. for instance, [6a] means that Mary's leaving is 3 days alter present moment of speech. On the other hand, the (b) sentence has the temporal intemretatlon of the embedded sentence evaluated with respect to the interpretation of the matrix clause, Le., [6b] means that Mary's leaving is 4 days before the moment of speech. To account for the sequence of tense in reported speeCh, Hornstein proposes the following rule: (SOT): For a sentence of the form "P1 that P2"' assign S 2 with E 1 • In general, for an n.level embedded sentence, SOT states that: assign S n with En. 1 (Hornslem81, p.140). With the SOT rule, [6a] and [6b] will be analyzed as follows: [6a'] a week ago I Et.RluS 1 S2__R2,E 2 ==> E 2 is 3 days [ after S I in three days [s~'] a week ago I EI.RI~S l I S2uR2.E 2 I in three days ==> E 2 is 4 days Defore S I The local property of SOT, Le., linking occurs only between nth and (n-1)th level, has a n~ce conseouence: ,t ex0tams wny a third level nested sentence like [7]: [7] John said a week ago (a) that Harry would 0elieve in 3 days (b) that Mary (i) will leave for London in 2 days (c) (ii) would has only two temporal readings: (1) sn 7(ci). Mary's leaving is two days after the moment of speech, and (2) m 7(cii), Mary's leaving Js two clays Oetore the moment Of speech. In part=cular, there ~s not a temporal reading corresponding to the situatmon fn which Mary's leaving ms hve days before the moment of speech. We would obta,n the th=rd reading if SOT allowed non-local hnking, e.g., ass=gned S 3 with E 1 . 2.2 Explanations of the Formal Constraints In the prewous section, we have examined three formal constraints on the denvatmn of complex tense structures from the Oas,c tense structures: (1) LOC. (2) RP, and (3) SOT. NOw, I want to show how the LOC falls out naturally from the computat=onal propertms of a temporal reasoning system along the line suggested by Allen (Allen84, Allen83), and also how the RP and SOT constraints have mtuitwe computat=onal motwation. The bes,s of Allen's comDutat=onal system ts a temporal logic based on intervals instead of time points. The temporal logic cons=stS of seven basic relations and their mveraes (Allen84, D.129, figure 1): 21 Relation svmbol symbol for meaninQ inverse X Oefore Y < > XXX YYY X equal Y = = XXX YYY X mee~s Y m mi XXXYYY X overlaps Y o oi XXX YYY X during Y d di XXX YYYYY X starts Y s si XXX YYYY X finishes Y f f i XXX YYYY The reasoning scheme tsa form of constraint propagation in a network of event nodes hnKed by temporal relat,onsmps. 
For instance, the situat=on as clescribed in the sentence "John arrived when we came" is represented by the network: A -- (> < m mi =) --> B \ / (<)~,~ (<1 L/ NOW where A = John's a r r i v a l and B = Our coming This network means that both event A and event B are before now, the moment of speech, while A can be before, alter or s=multaneous with B. When new temporal relatlonsmos are added, the system maintains consistency among events by orooagat,ng the effects of the new relatmnsmos wa a TaO/e ol Translt~wty Re/at~onsmps that tells the system how to deduce the set of adm=ss=ble relat=onsmos between events A and C given the retatlonsh=ps between A and B, and between B and C. Thus, for instance, Irom the relationships "A during B" and "B < C", the system can deduce "A < C". One orooerty of the constraint propagation algorithm generally =s that further mlormatlon only causes removal of members from the set of admissible labels, i.e., teml=orat relatlonsmDs, between any two old events (Allen83, p.8,35). NO new label can De added to the admissible set once it is created. Let us call Ires property of the constraint propagntlon algor, tnm the Delete Labei Condit=on (DLC). DLC can be mteroreted as a k=nd of reformation monotonicity condition on the temocral representation. Let u5 further restrict Allen's temooral logic to instantaneous intervals. ~.e.. each event corresponds to a single moment of time. The restricted logic has only one or,mitwe relat,on, <, and three ctner denved relat,ons: <, >, and >. There is a straightforward :ranslat=on of Hornstein's SRE notation =nto the network re=)resenta'Jon, namely, replace each comma symbol "," by < (or >. witr the event symbols reverse their roles) and each underscore symbol "~" by > (or < with similar a¢liustment on the event symbols). Thus, a tense structure such as: E_R,S can be represented as: s -(>)->E (> =) (>) R With this representation scheme, we can prove the following theorem: ~1) DLC--LOC Proof Let A and B range over { S, A1 E } and A = B. There are five bas=c types ol violations of the LOC: 1. A_B -- B_A 2. A B -, A,B 3. A_B --., B.A 4. A,B -- B,A 5. A,B -., B_A We can see that each of these cases ~s a v=olatlon of the DLC. To spell this out. we have tt~e following operations on the constraint network corresponding tO the above vlolat=ons of the LOC: f'.A-(<)-)'B --A-(>)->B 2'.A-(<)->B --A.(< = ).)B 3'.A.(<).>B -- A.(> = )->B 4'.A.(< = ).>B - - A - t > = )->B 5".A.(< = )->B --A.(>)->B In each of these cases, the operation involves the addihon of new members to the adm=ss=Dle set. Th=s =s ruled out Ioy DLC. Thus, we have the result that if LOC =s wolated, then DLC =s v=olated. In other words. DLC -- LOC. 5 --I The second constraint :o be accounted for is the RP which effecbvely states that (a) the 50omts of the matrix clause and the temporal adverb=al must be ~clent=cal. and (b) the IR !0dints of the matrix clause and the temporal aOverbml must be ~dent=cal. One nypothests for th,s rule is that: (H1) Only the matrix clause mtrocluces distract S and R points. in other words, the non-subcate<Jonzable temporal adjuncts do net ado new S and R points. H1 has to be modifieO slightly to taV, e the case of embedded sentence =nto account, namely, {Revised RP): Only the matrix clause and the subcategorizable SCOMP or VCOMP can introduce d=stinct S and R points. where SCOMP and VCOMP stand for sentent=al complement and S. The ¢om,e~e o~ thss Ihe~n ~' nm true. 22 verbal complement respectively. 
The interesting point is that both the rewsed RP and the locality property of SOT can be easily implemented ,n processing systems which have certain Oounoeoness constraint on the phrase structure rules (e.g., ,nformation cannot move across more than one bounding node). To illustrate this. let us consider the following tense interpretation rules embedded in the phrase structure rules Of the Lexlcal-Funct,onal Grammar: S -- NP VP ($ S-POINT) = NOW VP -- V (NP) (ADVP) (S') ($ S-POINT) = { (T E-POINT) if ($ tense) = PAST NOW 0tnerwise ADVP ~ Adv S S' -- COMPS Adv ~ when (T T-REL) = { <.>.=,m.mi } before (T T-REL) = { > } The S rule introduces a new S point and sets its value to now, The VP rule has two effects: (I) it does not introduce new S or R points for the temooral adveriolal phrase, thus imohcltly incorporating the revised RP rule, and (2) it looks at the tense of the embedded sentential comolement, setting the value of its S point to that of the E point of the higher clause if the tense is past, and to now, otherwise. Thus. tn th~s way, the second effect accomplishes what the SOT rule demands. 2.3 Implications for Learning If the revisions to Hornstem's theory Of tense are correct, the natural cluest=on to de asked is: FlOW dO speakers attain such Knowledge? This Question has two Darts: (1) How do spea~ers acquire the formal constraints on SRE derivation? and (2) How do speakers learn to associate the appropriate SRE structures with the baszC tenses of the language? Let us consider the first sub-Question. In the case of LOC, we have a neat answer .. the constraint need NOT be learned at all! We have shown that LOC falls out naturally as a consequence of the architecture and processing algorithm ot the computational system. AS regards the constraint RP. the learner has tO acquire something similar to Hr. But H1 IS a fairly simple hypothes~s that does not seem to require induct=on on extenswe hngmstic data. Finally, as we have shown =n the previous section, the boundeQness of the flow of information ol a orocessmg system leads directly to ~he locality orooerty of the SOT. The partTcular linking of S and E points as stipulated by the SOT, however, is a parameter of the iJnwersal Grammar that has tO be fixed. What about the second sub.question? How do speake~ ~earn to pair SRE conhguratlons wllh the basic tenses? There are 24 possible SRE configurations seven of which get grammat,calized. Here I want to prooose a principle of marKeOness ol SRE structures that has a natural computational motivation. Let us recall our restrictive temporal logic of instantaneous interval with one primitive relation, <, and three derived relations: <, >, and >. Represent a SRE configuration as follows: S ~ E The admissible labels are among { <. < =, >, > = }. So there are altogether 64 possible configurations that can be classified into three types: (1) Inconsistent labelings (16). e.g.. S\--( > )-~ E ? (<) (<) R (2) Labelings that do not constrain the SE given the labelings of SR and RE (32), e.g.: s--( ?)-.~ E (<) (>) R link (3) Labelings that are consistent and the SE )ink is c0nstra~ned by the SR and RE ]~nk (16), e.g.. s -(<)-> E (<) (<) R If we assume that labehngs of the third type corresPOnd tO the unmark, ed SRE configurations, the following division of unmarKeO and marked configurations is obtained: UNMARKED MARKED E~R~S E. RoS EIR.S E,R.S S,R.E S, RoE S~R.E S~RoE PAST PERFECT E~SoR SIMPLE PAST E.SoR PRESENT PERFECT EoS,R SIMPLE PRESENT E.S,R SIMPLE PRESENT S I E o R SIMPLE FUTURE SoE. 
R S, EmR S.E.R RoSoE Ro$.E R~E~S R~E,S R, E~S R.SmE R,E.S R.S.E FUTURE PERFECT There are only eight unmarked tense structures corresponding to the sixteen SRE netwo~ configurations of type 3 23 because a tense structure can be interpreted by more than one network rebresentations, e.g., the Past Perfect (E_R_S) has the tollowing two configurations: S--t:>).-* E S-i(> =)--> E (>) .,VI (>) (>)~ ;>) R The interesting result is that five out of the six basic tenses have unmarked SRE configurations. This agrees largely with our pretheoretlcal intuit=on that the SRE configurations that correspond to the basic tenses should be more "unmarked" than other possible SRE configurations. The fit. however, is not exact because the future perfect tense becomes the marked tense in this classification. Another prediction by this principle of markedneas is that both the simple future (S_R.E') and distant luture (S_R_E) are unmarked. It would 0e interesting to find out whether there are languages =n which the distant tuture actually gets grammat=calized. The final point tO be made =s about the second type of labelmgs. There are two Other possible ways of grouping the laOehngs: (1) given SR and SE. those labehngs ~n winch RE ~s constrained, and (2) given SE and HE. those in which SR is constrained. But these types of grouping are less likely because they would yield me s~mple present tense as a marked tense. Thus. they can be ruleO out iOy relatively few linguistic data. 3. Verb Aspect In cons=clenng the problem of tense, we have restricted ourselves to a subset of Aliens temporal logic, namely, using a temporal structure <:T._<> with hnear oraenng of time points. TO make use of the full Dower of Allen's temporal logic, we now turn to the problem of verb aspect. The two mare problems of the study of verb aspect are the correct charac!erizat~on of (1) the three funclamental types of verb predtcatlon according to the situation types that they signify .. state, process and event, and (2) the p(=rspectwes from which a situation ts viewed, or its truth evaluated -- s~mpte or progreSSive. 6 in the first part of his paper. Allen attempts to prowde a formal account of *he state/process/even', d~s~mctlon using a temDoral logic. However. I beheve that htS charactenzahon fa¢ls to capture welt.Known patterns of tense =mot;cations, and does not make the distinction ioetween situation types and perspective types funclamental to any adequate account of verb aspect. In the next 3ect=on. I will present some data that an,/ theory of verb aspect must be able to explain. 3.1 Data 3.1.1 Tense Implications 1, Statives rarely take the progressive aspect 7 , e.g., I know the answer. "1 am knowing the answer, 2. For verb predications denoting processes, the progressive of the verb form entails the perfect form, i.e., x is V.ing -- x has V-ed. For instance, John ts walking ---, John has walked. 3. For verb predications denoting events, the progresswe of the verb form entads the negation of the perfect form, Le., x is V.mg -- x has not V.ed. For instance, John ~s bumidmg a house ~ John has not budt the house. 3.1.2 Sentences containing When Sentences containing clauses connected by a connective such as "when" have different aspect tnterpretat~ons depending on the s~tuatlon types and perspective types revolved. [9] John laughed when Mary drew a circle. 
Situation/Perspective type: X = process/simple; Y = event/simple
Interpretation: X can be before, after or simultaneous with Y.

[10] John was laughing when Mary drew a circle.
Situation/Perspective type: X = process/progressive; Y = event/simple
Interpretation: Y occurs during X.

[11] John was angry when Mary drew a circle.
Situation/Perspective type: X = state/simple; Y = event/simple
Interpretation: X can be before, after, simultaneous with or during Y.

[12] John was laughing when Mary was drawing a circle.
Situation/Perspective type: X = process/progressive; Y = event/progressive
Interpretation: X must be simultaneous with Y.

3.2 Formal Account of the State/Process/Event Distinction

Define:

(a) X ⊂ Y <-> X d Y ∨ X s Y ∨ X f Y
(b) X ⊆ Y <-> X ⊂ Y ∨ X equal Y
(c) mom(t) <-> t is an instantaneous interval, i.e., consists of a single moment of time
(d) per(t) <-> t is a non-instantaneous interval 8

where X and Y are generic symbols denoting a state, event or process.

3.2.1 Progressive

(PROG): OCCUR(PROG(v,t)) <-> mom(t) ∧ ¬OCCUR(v,t) ∧ (∃t')(t d t' ∧ OCCUR(v,t')) 9

The progressive aspect is the evaluation of a situation from an interior point t of the situation, which has the property that though the sentence is not true at that instantaneous interval, it is true in a non-instantaneous interval t' properly containing t.

3.2.2 State

(S1): OCCUR(s,t) -> (∀t')(mom(t') ∧ t' ⊆ t -> OCCUR(s,t'))

A state verb is true at every instantaneous interval of t. The definition is similar to Allen's H.1 (Allen84, p. 130). The following theorem shows that state verbs do not occur with the progressive aspect.

(S-THEOREM): ¬OCCUR(PROG(s,t))

Proof: OCCUR(PROG(s,t))
  -> mom(t) ∧ ¬OCCUR(s,t) ∧ (∃t')(t d t' ∧ OCCUR(s,t'))
  -> OCCUR(s,t') for some t' containing t
  -> OCCUR(s,t) (by S1): contradiction.

This theorem raises the following question: why do some statives occur with the progressive? I think there are two answers. First, the verb in question may have a use other than the stative use (e.g., "have" is a stative when it means "possession", and not a stative when it means "experiencing", as in "John is having a good time in Paris"). Second, the English progressive may have a second meaning in addition to that characterized by PROG above. A frequent usage of the progressive is to indicate short duration or temporariness, e.g., in "They are living in Cambridge" / "They live in Cambridge".

---
6. Some of the better known works are: Vendler67, Comrie76, Mourelatos78.
7. It has often been pointed out that some statives do take the progressive form, e.g., "I am thinking about the exam", "The doctor is seeing a patient." However, a statistical study by Ota shows that the familiar statives rarely occur with the progressive aspect - less than 2% of the time (Ota63, section 2.2).
8. This section benefits from the insights of Barry Taylor (Taylor77).
9. A reviewer of this paper points out that the PROG axiom seems to imply that if something is in progress, it must complete. Thus, if Max is drawing a circle, then at some future time, he must have drawn the circle. This inference is clearly false because there is nothing contradictory about "Max was drawing a circle but he never drew it." For instance, Max might suffer a heart attack and die suddenly. This inference problem of the progressive form of an event verb is known as the imperfective paradox in the literature. One way out is to deny that Max was really drawing a circle when he died; rather he was drawing something which would have been a circle had he not died. This type of analysis would involve some machinery from possible world semantics.
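The state axiom and the S-THEOREM can be sanity-checked on a small finite model. The sketch below is mine, not the paper's: integer moments stand in for the dense timeline (so only the stative result is checked here), and the predicate names mirror the axioms above.

TIME = range(10)
INTERVALS = [(a, b) for a in TIME for b in TIME if a <= b]

def mom(t): return t[0] == t[1]
def per(t): return t[0] < t[1]
def during(t, u):                       # Allen's d: t strictly inside u
    return u[0] < t[0] and t[1] < u[1]

def occur_state(moments, t):
    # (S1): a state holds over t iff it holds at every moment in t.
    return all(x in moments for x in range(t[0], t[1] + 1))

def occur_prog(occur, t):
    # (PROG): t is a moment where the plain sentence is false, but it is
    # true over some non-instantaneous interval properly containing t.
    return mom(t) and not occur(t) and any(
        during(t, u) and per(u) and occur(u) for u in INTERVALS)

know = set(range(6))                    # a stative, true at moments 0..5
holds = lambda t: occur_state(know, t)
# S-THEOREM: the progressive of a state is never true.
print(any(occur_prog(holds, t) for t in INTERVALS))   # False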
3.2.3 Process

A process verb can be true only at an interval larger than a single moment. This property differs crucially from that of the statives.

(P1): OCCUR(p,t) -> per(t)
(P2): OCCUR(p,t) -> (∀t')(per(t') ∧ t' ⊆ t -> OCCUR(p,t'))

The following theorem shows that for a process verb, the progressive verb form entails the perfect form.

(P-THEOREM): OCCUR(PROG(p,t)) -> (∃t')(per(t') ∧ t' < t ∧ OCCUR(p,t'))

Proof: OCCUR(PROG(p,t))
  -> mom(t) ∧ ¬OCCUR(p,t) ∧ (∃t')(t d t' ∧ OCCUR(p,t'))
  -> OCCUR(p,t') for some t' such that t d t'
  -> ∃m1 ∈ t', m1 < t   (since t d t')
  -> ∃m2 ∈ t', m1 < m2 < t   (by density of time points)
Let t'' be the interval [m1, m2]. Then we have t'' < t and t'' ⊆ t'. By (P2), we have OCCUR(p,t''). That is, p has occurred.

The characterization of process verbs by Allen (his O.2) is less satisfactory because it combines both the notion of progressive aspect (his "OCCURRING") and the process verb into the same axiom. Furthermore, the difference between the predicates "OCCUR" and "OCCURRING" is not adequately explained in his paper.

3.2.4 Event

An event verb shares an important property with a process verb, namely, it can be true only at a non-instantaneous interval.

(E1): OCCUR(e,t) -> per(t)
(E2): OCCUR(e,t) -> (∀t')(per(t') ∧ t' ⊂ t -> ¬OCCUR(e,t'))

The following theorem shows that the progressive form of an event verb entails the negation of the perfect form.

(E-THEOREM): OCCUR(PROG(e,t)) -> ¬(∃t')(per(t') ∧ t' < t ∧ OCCUR(e,t'))

Proof: As in the proof of (P-THEOREM), we can find a non-instantaneous interval t'' such that t'' < t and t'' ⊂ t'. But for any such t'', we have ¬OCCUR(e,t'') because of (E2). That is, it cannot be the case that e has occurred.

Again the crucial property (E1) is not captured by Allen's characterization of events (his O.1).

3.3 Constraint on Temporal Interpretations Involving When

To account for the variety of aspect interpretations presented in section 3.1.2, I propose the following constraint on situation/perspective type:

(C-ASPECT): Let "dynamic" stand for a process or event.
(a) simple/dynamic -> mom(t)
(b) simple/state -> per(t)
(c) progressive/dynamic -> per(t) ∧ ⊆

Perspective is a way of looking at the situation type. For a process or event, the simple aspect treats the situation as an instantaneous interval even though the situation itself may not be instantaneous. For a state, the simple aspect retains its duration. The progressive aspect essentially views a process or event from its interior, thus requiring a stance in which the situation is a non-instantaneous interval and the admissible temporal relationships are the ⊆ relations, i.e., s, si, f, fi, d, di, equal.

Let me show graphically how C-ASPECT accounts for the aspect interpretations of sentences [9] to [12].

[9'] simple/process WHEN simple/event
Admissible relations: < m = mi >

   <:  X Y     m:  XY     =:  X     mi:  YX     >:  Y X
                              Y

[10'] progressive/process WHEN simple/event
Admissible relations: si di fi

   si:  XXX     di:  XXX     fi:  XXX
        Y             Y             Y

[11'] simple/state WHEN simple/event
Admissible relations: > mi si di fi m <

   >:  Y XXX    mi:  YXXX    si:  XXX    di:  XXX    fi:  XXX
                             Y            Y            Y
   m:  XXXY     <:  XXX Y

[12'] prog/process WHEN prog/event
Admissible relations: = f fi s si

   =:  XXX     f:   XXX    fi:  XXXX    s:  XXX     si:  XXXX
       YYY         YYYY         YYY         YYYY         YYY
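A checker for these interpretations can simply store the admissible relation sets and test a proposed reading against them. The sketch below is my own illustration: the type names and the check function are invented, and the relation sets are transcribed from [9']-[12'] above.

# Admissible Allen relations for "X WHEN Y", keyed by the
# perspective/situation types of X and Y.
ADMISSIBLE = {
    ('simple/process', 'simple/event'): {'<', 'm', '=', 'mi', '>'},
    ('prog/process',   'simple/event'): {'si', 'di', 'fi'},
    ('simple/state',   'simple/event'): {'>', 'mi', 'si', 'di', 'fi', 'm', '<'},
    ('prog/process',   'prog/event'):   {'=', 'f', 'fi', 's', 'si'},
}

def check(x_type, y_type, relation):
    """Is this temporal relation a legal reading of 'X WHEN Y'?"""
    return relation in ADMISSIBLE[(x_type, y_type)]

# "John was laughing when Mary drew a circle": the drawing (Y) must fall
# inside the laughing (X), so X containing Y ('di') is admissible ...
print(check('prog/process', 'simple/event', 'di'))   # True
# ... but Y wholly before X is not:
print(check('prog/process', 'simple/event', '>'))    # False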
4. Conclusion

In this paper, I have examined two problems regarding linguistic semantics: tense and aspect. Important relationships between abstract constraints governing linguistic behavior and a computational scheme to reason about temporal relationships are discussed. In particular, I have shown that certain formal constraints, such as the Linear Order Constraint on tense, fall out naturally as a consequence of some computational assumptions. The interesting result is that this formal constraint need not be learned at all. Another important role of a representation scheme in explaining phenomena that exist on an entirely different - linguistic - level is illustrated by the formulation of the C-ASPECT constraint to account for interpretations of sentences containing temporal connectives. The study of linguistic semantics also sheds light on a representation of time by revealing the fundamental distinctions that must be made; e.g., a tensed sentence involves three distinct time points, and the aspectual interpretations require the instantaneous/non-instantaneous interval distinction.

Acknowledgments

I would like to thank Prof. Robert C. Berwick for his insightful suggestion that the relationship between a cognitive theory of time and a linguistic theory of tense is a fruitful and important area for research. He also contributed substantially to the presentation of this paper. Finally, I also thank Norbert Hornstein, who provided useful comments during the revision of this paper.

5. References

[Allen84] James Allen, "Towards a General Theory of Action and Time", AI Journal, Vol. 23, No. 2, July 1984.
[Allen83] James Allen, "Maintaining Knowledge about Temporal Intervals", CACM, Vol. 26, No. 11, November 1983.
[Comrie76] Bernard Comrie, Aspect, Cambridge University Press, 1976.
[Hornstein81] Norbert Hornstein, "The study of meaning in natural language", in: Explanation in Linguistics, Longman, 1981.
[Hornstein77] Norbert Hornstein, "Towards a Theory of Tense", Linguistic Inquiry, Vol. 8, No. 3, Summer 1977.
[Jespersen65] Otto Jespersen, The Philosophy of Grammar, Norton Library, 1965.
[Mourelatos78] A.P.D. Mourelatos, "Events, processes and states", Linguistics and Philosophy 2, 1978.
[Ota63] Akira Ota, Tense and Aspect of Present Day American English, Tokyo, 1963.
[Taylor77] Barry Taylor, "Tense and Continuity", Linguistics and Philosophy 1, 1977.
[Vendler67] Zeno Vendler, Linguistics in Philosophy, Cornell University Press, 1967.
Stress Assignment in Letter to Sound Rules for Speech Synthesis

Kenneth Church
AT&T Bell Laboratories

Abstract

This paper will discuss how to determine word stress from spelling. Stress assignment is a well-established weak point for many speech synthesizers because stress dependencies cannot be determined locally. It is impossible to determine the stress of a word by looking through a five or six character window, as many speech synthesizers do. Well-known examples such as degráde / dègradátion and télegraph / telégraphy demonstrate that stress dependencies can span over two and three syllables. This paper will present a principled framework for dealing with these long distance dependencies. Stress assignment will be formulated in terms of Waltz-style constraint propagation with four sources of constraints: (1) syllable weight, (2) part of speech, (3) morphology and (4) etymology. Syllable weight is perhaps the most interesting, and will be the main focus of this paper. Most of what follows has been implemented.

1. Background

A speech synthesizer is a machine that inputs a text stream and outputs an acoustic signal. One small piece of this problem will be discussed here: words -> phonemes. The resulting phonemes are then mapped into a sequence of lpc dyads which are combined with duration and pitch information to produce speech.

text -> intonation phrases -> words -> phonemes -> lpc dyads + prosody -> acoustic

There are two general approaches to word -> phonemes:

- Dictionary Lookup
- Letter to Sound (i.e., sound the word out from basic principles)

Both approaches have their advantages and disadvantages; the dictionary approach fails for unknown words (e.g., proper nouns) and the letter to sound approach fails when the word doesn't follow the rules, which happens all too often in English. Most speech synthesizers adopt a hybrid strategy, using the dictionary when appropriate and letter to sound for the rest.

Some people have suggested to me that modern speech synthesizers should do away with letter to sound rules now that memory prices are dropping so low that it ought to be practical these days to put every word of English into a tiny box. Actually memory prices are still a major factor in the cost of a machine. But more seriously, it is not possible to completely do away with letter to sound rules because it is not possible to enumerate all of the words of English. A typical college dictionary of 50,000 headwords will account for about 93% of a typical newspaper text. The bulk of the unknown words are proper nouns.

The difficulty with proper nouns is demonstrated by the table below, which compares the Brown Corpus with the surnames in the Kansas City Telephone Book. The table answers the question: how much of each corpus would be covered by a dictionary of n words? Thus the first line shows that a dictionary of 2000 words would cover 68% of the Brown Corpus, and a dictionary of 2000 names would cover only 46% of the Kansas City Telephone Book. It should be clear from the table that a dictionary of surnames must be much larger than a typical college dictionary (~20,000 entries). Moreover, it would be a lot of work to construct such a dictionary since there are no existing computer readable dictionaries for surnames.

Size of Word Dictionary   Brown Corpus   Size of Name Dictionary   Kansas
 2000                     68%             2000                     46%
 4000                     78%             4000                     57%
 6000                     83%             6000                     63%
 8000                     86%             8000                     68%
10000                     89%            10000                     72%
12000                     91%            12000                     75%
14000                     92%            14000                     77%
16000                     94%            16000                     79%
18000                     95%            18000                     81%
20000                     95%            20000                     83%
22000                     96%            22000                     84%
24000                     97%            24000                     86%
26000                     97%            26000                     87%
28000                     98%            28000                     88%
30000                     98%            30000                     89%
32000                     98%            32000                     90%
34000                     99%            34000                     91%
36000                     99%            36000                     91%
38000                     99%            38000                     92%
40000                     99%            40000                     93%
Actually, this table overestimates the effectiveness of the dictionary for practical applications. A fair test would not use the same corpus both for selecting the words to go into the dictionary and for testing the coverage. The scores reported here were computed post hoc, a classic statistical error. I tried a more fair test, where a dictionary of 43777 words (the entire Brown Corpus) was tested against a corpus of 10687 words selected from the AP news wire. The results showed 96% coverage, which is slightly lower (as expected) than the 99% figure reported in the table for a 40000 word dictionary.

For names, the facts are much more striking, as demonstrated in the following table, which tests name lists of various sizes against the Bell Laboratories phone book. (As above, the name lists were gathered from the Kansas City Telephone Book.)*

Size of Word List (Kansas)   Coverage of Test Corpus (Bell Labs)
 2000                        0.496
 4000                        0.543
 6000                        0.562
 8000                        0.571
10000                        0.577
20000                        0.589
40000                        0.595
50000                        0.596
60000                        0.596
90000                        0.597

Note that the asymptote of 60% coverage is quickly reached after only about 5000-10000 words, suggesting (a) that the dictionary approach may only be suitable for the 5000 to 10000 most frequent names because larger dictionaries yield only negligible improvements in performance, and (b) that the dictionary approach has an inherent limitation on coverage of about 60%. To increase the coverage beyond this, it is probably necessary to apply alternative methods such as letter to sound rules.

Over the past year I have been developing a set of letter to sound rules as part of a larger speech synthesis project currently underway at Murray Hill. Only one small piece of my letter to sound rules, orthography -> stress, will be discussed here. The output stress assignment is then used to condition a number of rules such as palatalization in the mapping from letters to phonemes.

2. Weight as an Intermediate Level of Representation

Intuitively, stress dependencies come in two flavors: (a) those that apply locally within a syllable, and (b) those that apply globally between syllables. Syllable weight is an attempt to represent the local stress constraints. Syllables are marked either heavy or light, depending only on the local 'shape' (e.g., vowel length and number of post-vocalic consonants). Heavy syllables are more likely to be stressed than light syllables, though the actual outcome depends upon contextual constraints, such as the English main stress rule, which will be discussed shortly.

The notion of weight is derived from Chomsky and Halle's notion of strong and weak clusters [Chomsky and Halle] (SPE). In phonological theory, weight is used as an intermediate level of representation between the input underlying phonological representation and the output stress assignment. In a similar fashion, I will use weight as an intermediate level of representation between the input orthography and the output stress. The orthography -> stress problem will be split into two subproblems:

- Orthography -> Weight
- Weight -> Stress

* Admittedly, this test is somewhat unfair to the dictionary approach since the ethnic mixture in Kansas City is very different from that found here at Bell Laboratories.
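The decomposition can be sketched as a two-stage pipeline. The code below is my own illustration, not the program's: the weight rules are a drastic simplification of those given in the next section, and the stress table is a two-syllable slice using the variant in which a heavy extrametrical syllable carries tertiary stress.

VOWELS = set('aeiouy')
LONG_DIGRAPHS = {'ai', 'ee', 'ey', 'oa', 'oi'}    # illustrative list

def orthography_to_weight(syllables):
    """Input is pre-syllabified with the word-final consonant stripped,
    e.g. ['tor', 'men'] for 'torment'; closed or long-vowel syllables
    are heavy, open short-vowel syllables are light."""
    return ''.join('H' if syl[-1] not in VOWELS or syl[-2:] in LONG_DIGRAPHS
                   else 'L' for syl in syllables)

# 1 = primary, 3 = tertiary, 0 = unstressed; nouns treat the final
# syllable as extrametrical.
WEIGHT_TO_STRESS = {('HH', 'verb'): '31', ('HH', 'noun'): '13',
                    ('LH', 'verb'): '01', ('HL', 'verb'): '10',
                    ('LL', 'verb'): '10'}

def stress(syllables, pos):
    return WEIGHT_TO_STRESS[(orthography_to_weight(syllables), pos)]

print(stress(['tor', 'men'], 'verb'))   # '31': tormént
print(stress(['tor', 'men'], 'noun'))   # '13': tórment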
3. What is Syllable Weight?

Weight is a binary feature (Heavy or Light) assigned to each syllable. The final syllables of the verbs obey, maintain, erase, torment, collapse, and exhaust are heavy because they end in a long vowel or two consonants. In contrast, the final syllables of develop, astonish, edit, consider, and promise are light because they end in a short vowel and at most one consonant. More precisely, to compute the weight of a syllable from the underlying phonological representation, strip off the final consonant and then parse the word into syllables (assigning consonants to the right when there is ambiguity).

Example     Weight   Reason
ow-bey      heavy    final syllable has a long vowel
tor-men     heavy    final syllable is closed
diy-ve-lo   light    final syllable is open with a short vowel

Then, if the syllable is closed (i.e., ends in a consonant, as in tor-men) or if the vowel is marked underlyingly long (as in ow-bey), the syllable is marked heavy. Otherwise, the syllable ends in an open short vowel and it is marked light. Determining syllable weight from the orthography is considerably more difficult than from the underlying phonological form. I will return to this question shortly.

4. Weight -> Stress

Global stress assignment rules apply off the weight representation. For example, the main stress rule of English says that verbs have final stress if the final syllable is heavy (e.g., obey), and penultimate stress if the final syllable is light (e.g., develop). The main stress rule works similarly for nouns, except that the final syllable is ignored (extrametrical [Hayes]). Thus, nouns have penultimate stress if the penultimate syllable is heavy (e.g., aróma) and antepenultimate stress if the penultimate syllable is light (e.g., cínema).

Example    Penultimate Weight   Reason
aróma      heavy                long vowel
veránda    heavy                closed syllable
cínema     light                open syllable & short vowel

Adjectives stress just like verbs except that suffixes are ignored (extrametrical). Thus monomorphemic adjectives such as discréet, robúst and cómmon stress just like verbs (the final syllable is stressed if it is heavy, and otherwise the penultimate syllable is stressed), whereas adjectives with single syllable suffixes such as -al, -ous, -ant, -ent and -ive follow the same pattern as regular nouns [Hayes, p. 242].

Stress Pattern of Suffixed Adjectives
Light Penultimate   Heavy Penultimate   Heavy Penultimate
munícipal           àdjectíval          fratérnal
magnánimous         desírous            treméndous
signíficant         clairvóyant         relúctant
ínnocent            complácent          depéndent
prímitive           condúcive           expénsive

5. Sproat's Weight Table

A large number of phonological studies (e.g., [Chomsky and Halle], [Liberman and Prince], [Hayes]) outline a deterministic procedure for assigning stress from the weight representation and the number of extrametrical syllables (1 for nouns, 0 for verbs). A version of this procedure was implemented by Richard Sproat last summer. For efficiency purposes, Sproat's program was compiled into a table, which associated each possible input with the appropriate stress pattern.

Sproat's Weight Table
           Part of Speech
Weight     Verb    Noun
H          1       1
L          1       1
HH         31      10
HL         10      10
LH         01      10
LL         10      10
HHH        103     310
HHL        310     310
HLH        103     100
HLL        310     100
LHH        103     010
LHL        010     010
LLH        103     100
LLL        010     100
etc.

Note that the table is extremely small. Assuming that words have up to N syllables and up to E extrametrical syllables, there are E * (2^(N+1) - 2) possible inputs. For E = 2 and N = 8, the table has only 1020 entries, which is not unreasonable.
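The table-size arithmetic is easy to verify: words of 1..N syllables give 2 + 4 + ... + 2^N distinct weight strings, and each extrametricality value doubles the count. (The function below is an illustration, not the program's code.)

def table_size(N, E):
    return E * sum(2 ** n for n in range(1, N + 1))

print(table_size(8, 2))   # 1020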
6. Analogy with Waltz' Constraint Propagation Paradigm

Recall that Waltz was the first to show how constraints could be used effectively in his program that analyzed line drawings in order to separate the figure from the ground and to distinguish concave edges from convex ones. He first assigned each line a convex label (+), a concave label (-) or a boundary label (<, >), using only local information. If the local information was ambiguous, he would assign a line two or more labels. Waltz then took advantage of the constraints imposed where multiple lines come together at a common vertex. One would think that there ought to be 4^2 ways to label a vertex of two lines and 4^3 ways to label a vertex of three lines and so on. By this argument, there ought to be 208 ways to label a vertex. But Waltz noted that there were only 18 vertex labelings that were consistent with certain reasonable assumptions about the physical world. Because the inventory of possible labelings was so small, he could disambiguate lines with multiple assignments by checking the junctures at each end of the line to see which of the assignments were consistent with one of the 18 possible junctures. This simple test turned out to be extremely powerful.

Sproat's weight table is very analogous with Waltz' list of vertex constraints; both define an inventory of global contextual constraints on a set of local labels (H and L syllables in this application, and +, -, >, < in Waltz' application). Waltz' constraint propagation paradigm depends on a highly constrained inventory of junctures. Recall that only 18 of 208 possible junctures turned out to be grammatical. Similarly, in this application there are very strong grammatical constraints. According to Sproat's table, there are only 51 distinct output stress assignments, a very small number considering that there are 1020 distinct inputs.

Possible Stress Assignments
1      103     3103     020100     0202013
3      310     02010    020103     2002010
01     313     02013    200100     2002013
31     0100    20010    200103     2020100
10     0103    20013    202010     2020103
13     2001    20100    202013     3202010
010    2010    20103    320100     3202013
013    2013    32010    320103     02020100
100    3100    32013    0202010    02020103
20020100    20020103    20202010    20202013    32020100    32020103

The strength of these constraints will help make up for the fact that the mapping from orthography to weight is usually underdetermined. In terms of information theory, about half of the bits in the weight representation are redundant since log 51 is about half of log 1020. This means that I only have to determine the weight for about half of the syllables in a word in order to assign stress.

The redundancy of the weight representation can also be seen directly from Sproat's weight table, as shown below. For a one syllable noun, the weight is irrelevant. For a two syllable noun, the weight of the penultimate is irrelevant. For a three syllable noun, the weight of the antepenultimate syllable is irrelevant if the penultimate is light. For a four syllable noun, the weight of the antepenultimate is irrelevant if the penultimate is light, and the weights of the initial two syllables are irrelevant if the penultimate is heavy. These redundancies follow, of course, from general phonological principles of stress assignment.

Weight by Stress (for short Nouns)
Stress   Weight
1        L, H
10       LL, HL
13       LH, HH
010      LHL
310      HHL
013      LHH
313      HHH
100      HLL, LLL
103      LLH, HLH
0100     LHLL, LLLL
3100     HHLL, HLLL
0103     LLLH, LHLH
3103     HLLH, HHLH
2010     LLHL, HHHL, LHHL, HLHL
2013     LHHH, HLHH, LLHH, HHHH
7. Orthography -> Weight

For practical purposes, Sproat's table offers a complete solution to the weight -> stress subtask. All that remains to be solved is: orthography -> weight. Unfortunately, this problem is much more difficult and much less well understood. I'll start by discussing some easy cases, and then introduce the pseudo-weight heuristic, which helps in some of the more difficult cases. Fortunately, I don't need a complete solution to orthography -> weight since weight -> stress is so well constrained.

In easy cases, it is possible to determine the weight directly from the orthography. For example, the weight of torment must be "HH" because both syllables are closed (even after stripping off the final consonant). Thus, the stress of torment is either "31" or "13" depending on whether it has 0 or 1 extrametrical final syllables:*

(stress-from-weights "HH" 0) => ("31") ; verb
(stress-from-weights "HH" 1) => ("13") ; noun

However, most cases are not this easy. Consider a word like record, where the first syllable might be light if the first vowel is reduced, or it might be heavy if the vowel is underlyingly long or if the first syllable includes the /k/. It seems like it is impossible to say anything in a case like this. The weight, it appears, is either "LH" or "HH". Even with this ambiguity, there are only three distinct stress assignments: 01, 31, and 13.

(stress-from-weights "LH" 0) => ("01")
(stress-from-weights "HH" 0) => ("31")
(stress-from-weights "LH" 1) => ("13")
(stress-from-weights "HH" 1) => ("13")

8. Pseudo-Weights

In fact, it is possible now to use the stress to further constrain the weight. Note that if the first syllable of record is light it must also be unstressed, and if it is heavy it must also be stressed. Thus, the third line above is inconsistent. I implement this additional constraint by assigning record a pseudo-weight of "-H", where the "-" sign indicates that the weight assignment is constrained to be the same as the stress assignment (either heavy & stressed or not heavy & not stressed). I can now determine the possible stress assignments of the pseudo-weight "-H" by filling in the "-" constraint with all possible bindings (H or L) and testing the results to make sure the constraint is met:

(stress-from-weights "LH" 0) => ("01")
(stress-from-weights "HH" 0) => ("31")
(stress-from-weights "LH" 1) => ("13") ; No Good
(stress-from-weights "HH" 1) => ("13")

Of the four logical inputs, the "-" constraint excludes the third case, which would assign the first syllable a stress but not a heavy weight. Thus, there are only three possible input/output relations meeting all of the constraints:**

Weight   Extrametrical Syllables   Stress
LH       0 (verb)                  01
HH       0 (verb)                  31
HH       1 (noun)                  13

All three of these possibilities are grammatical. The following pseudo-weights are defined:

Label   Title           Constraints
H       Heavy           weight = H; stress is unknown
L       Light           weight = L; stress is unknown
-       Unknown         (weight = H) <=> (stress /= 0)
S       Superheavy      weight = H; stress /= 0
R       Superlight      weight = L; stress = 0
N       Sonorant        (weight = H) <=> (stress /= 0)
?       Truly Unknown   weight is unknown; stress is unknown

* Actually, in practice, the weight determination is complicated by the fact that suffixes such as -y and -ow might be extrametrical. Note, for example, that the adjective nárrow does not stress on its final syllable, because the adjectival suffix -ow is extrametrical.

** The noun should probably have the stress 10 rather than the stress 13. I assume that the extrametrical syllable gets 3 stress if it is heavy, and 0 stress if it is light. The stress of the extrametrical syllable is more difficult to predict, as discussed below.
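The filtering behind pseudo-weights can be rendered as a small generate-and-test loop. The sketch below is my own reconstruction; the table rows are the ones used in the record example above (with the heavy extrametrical syllable receiving 3 stress), and the N label is treated like "-".

from itertools import product

TABLE = {('LH', 0): '01', ('HH', 0): '31', ('LH', 1): '13', ('HH', 1): '13'}

def satisfies(label, weight, stress_digit):
    if label == 'H': return weight == 'H'
    if label == 'L': return weight == 'L'
    if label in '-N': return (weight == 'H') == (stress_digit != '0')
    if label == 'S': return weight == 'H' and stress_digit != '0'
    if label == 'R': return weight == 'L' and stress_digit == '0'
    return True                                   # '?': anything goes

def solutions(pseudo, extrametrical):
    out = []
    for weights in product('HL', repeat=len(pseudo)):
        w = ''.join(weights)
        stress = TABLE.get((w, extrametrical))
        if stress and all(satisfies(l, wt, st)
                          for l, wt, st in zip(pseudo, w, stress)):
            if stress not in out:
                out.append(stress)
    return out

print(solutions('-H', 0))   # ['01', '31']  recórd / récord, the verb cases
print(solutions('-H', 1))   # ['13']        the LH noun row is filtered out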
The stress of the extrametrical syllable is more difficult to predict, as discussed below. I have already given examples of the labels H, L and -. S and R are used in certain morphological analyses (see below), N is used for examples where Hayes would invoke his rule of Sonorant Destressing (see below), and ? is not used except for demonstrating the program. The procedure that assigns pseudo-weights to the orthography is roughly as outlined below, ignoring morphology, etymology and more special cases than I wish to admit.

1. Tokenize the orthography so that digraphs such as th, gh, wh, ae, ai, ei, etc., are single units.
2. Parse the string of tokens into syllables (assigning consonants to the right when the location of the syllable boundary is ambiguous).
3. Strip off the final consonant.
4. For each syllable:
   a. Silent e, vocalic y and syllabic sonorants (e.g., -le, -er, -re) are assigned no weight.
   b. Digraphs that are usually realized as long vowels (e.g., oi) are marked H.
   c. Syllables ending with sonorant consonants are marked N; other closed syllables are marked H.
   d. Open syllables are marked -.

In practice, I have observed that there are remarkably few stress assignments meeting all of the constraints. After analyzing over 20,000 words, there were no more than 4 possible stress assignments for any particular combination of pseudo-weight and number of extrametrical syllables. Most observed combinations had a unique stress assignment, and the average (by observed combination with no frequency normalization) has 1.5 solutions. In short, the constraints are extremely powerful; words like record with multiple stress patterns are the exception rather than the rule.

9. Ordering Multiple Solutions

Generally, when there are multiple stress assignments, one of the possible stress assignments is much more plausible than the others. For instance, nouns with the pseudo-weight "H-L" (e.g., difference) have a strong tendency toward antepenultimate stress, even though they could have either 100 or 310 stress depending on the weight of the penultimate. The program takes advantage of this fact by returning a sorted list of solutions, all of which meet the constraints, but the solutions toward the front of the list are deemed more plausible than the solutions toward the rear of the list.

(stress-from-weights "H-L" 1) => ("100" "310")

Sorting the solution space in this way could be thought of as a kind of default reasoning mechanism. That is, the ordering criterion, in effect, assigns the penultimate syllable a default weight of L unless there is positive evidence to the contrary. Of course, this sorting technique is not as general as an arbitrary default reasoner, but it seems to be general enough for the application. This limited defaulting mechanism is extremely efficient when there are only a few solutions meeting the constraints.

This default mechanism is also used to stress the following nouns:

Hottentot    Jackendoff   balderdash   ampersand
Hackensack   Arkansas     Algernon     mackintosh
davenport    merchandise  cavalcade    palindrome
nightingale  Appelbaum    Aberdeen     misanthrope

where the penultimate syllable ends with a sonorant consonant (n, r, l). According to what has been said so far, these sonorant syllables are closed, and so the penultimate syllable should be heavy and should therefore be stressed. Of course, these nouns all have antepenultimate stress, so the rules need to be modified. Hayes suggested a Sonorant Destressing rule which produced the desired results by erasing the foot structure (destressing) over the penultimate syllable so that later rules will reanalyze the syllable as unstressed. I propose instead to assign these sonorant syllables the pseudo-weight N, which is essentially identical to -.* In this way, all of these words will have the pseudo-weight HNH, which is most likely stressed as 103 (the correct answer) even though 313 also meets the constraints but fares worse on the ordering criterion.

(stress-from-weights "HNH" 1) => ("103" "313")

Contrast the examples above with Adirondack, where the stress does not back up past the sonorant syllable. The ordering criterion is adjusted to produce the desired results in this case by assuming that two binary feet (i.e., 2010 stress) are more plausible than one tertiary foot (i.e., 0100 stress).

(weights-from-orthography "Adirondack") => "L-NH"
(stress-from-weights "L-NH") => ("2013" "0103")

It ought to be possible to adjust the ordering criterion in this way to produce (essentially) the same results as Hayes' rules.

* N and - used to be identical. I am still not sure the differences are justified. At any rate, the differences are very subtle and certainly not worth going into here.
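Step 4 of the procedure, including the sonorant N label used in this section, is easy to approximate in code. The sketch below is a drastic simplification of my own (real tokenization and syllabification are much hairier, the digraph list is illustrative, and silent-e handling is omitted):

DIGRAPH_VOWELS = {'ai', 'ee', 'ei', 'oa', 'oi'}   # usually long vowels
VOWELS = set('aeiouy')
SONORANTS = set('lmnr')

def pseudo_weights(syllables):
    """syllables: the word pre-syllabified with the word-final consonant
    already stripped, e.g. ['jack', 'en', 'dof'] for Jackendoff."""
    labels = []
    for syl in syllables:
        if syl[-2:] in DIGRAPH_VOWELS:
            labels.append('H')               # long-vowel digraph
        elif syl[-1] not in VOWELS:
            # closed syllable: N if it ends in a sonorant, else H
            labels.append('N' if syl[-1] in SONORANTS else 'H')
        else:
            labels.append('-')               # open syllable: weight unknown
    return ''.join(labels)

print(pseudo_weights(['jack', 'en', 'dof']))   # 'HNH', as discussed above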
Hayes suggested a Sonorant Dnstressing rule which produced the desired results by erasing the foot structure (destressing) over the penultimate syllable so that later rules will reanalyze the syllable as unstressed. I propose instead to assign these sonorant syllables the pseudo-weight of N which is essentially identical to -.* In this way. all of these words will have the pseudo- weight of HNH which is most likely stressed as 103 (the correct answer) even though 313 also meets the constraints, but fair worse on the ordering criteron. (stress-from-weights "HNH" I) -- ('I03" "313") Contrast the examples above with Adirondack where the stress does not back ap past the sonorant syllable. The ordering criterion is adjusted to produce the desired results in this case, by assuming that two binary feet (i.e., 2010 stress) are more plausible than one tertiary foot (i.e., 0100 stress). (weights-from-orthography "Adirondack') -- "L-NH" (stress-from-weights "L-NH') -- ('2013" "0103") It ought to be possible to adjust the ordering criterion in this way to produce (essentially) the same results as Hayes" rules. tO. M ~ Thus far, the di~-usion has assumed monomorphemic input. Morphological affixes add yet another rich set of constraints. Recall the examples mentioned in the abstract, degrhde/dlrgrudhtion and tklegruphkei~grophy, which were used to illustrate that stress alternations are conditioned by morphology. This section will discuss how this is handled in the program. The task is divided into two questions: (I) how to parse the word into morphemes, and (2) how to integrate the morphological parse into the rest of stress assignment procedure discussed above. ~" N s-d - used to I~ idlm"aL I sm -,ill am mm du~ differeeczs us just~'=d. At in,/tram. IU differt~s m~l vm7 ml~ t- aad ¢~rtamly om ~q)rth pin S into h~e. 250 The morphological parser uses a grammar roughly of the form: word -- level3 (regular-inflection)* level3 -- (level3-prefix) * level2 (level3-suffix)* level2 -- (levei2-prefix)* levell (level2-suffix)* levell ~ (levell-profix)* (syl)* (leveli-suffix)* where latinate affixes such as in+. it+, ac+, +ity, +ion. +ire. -al are found at level l, Greek and Germanic al~tes such as hereto#, un#. under#. #hess. #/y are found at level 2, and compounding is found at level 3. The term level refers to Mohanan's theory of Level Ordered Morphology and Phonology [Mohanan] which builds upon a number of well-known differences between + boundary affixes (level I) and # boundary affixes (level 2). • Distributional Evidence: It is common to find a level [ affix inside the scope of a level 2 affix (e.g., nn#in +terned and form +al#ly), but not the other way around (e.g., *in+un#terned and • form#1y +al). • Wordness: Level 2 affixes attach to words, whereas level I affixes may attach to fragments. Thus, for example, in+ and +ai can attach to fragments as in intern and criminal in ways that level 2 cannot *un#tern and *crimin#ness. • Stress Alternations: Stress alternations are found at level I p~rent parent +hi but not at level 2 as demonstrated by parent#hood. Level 2 suffixes are called stress neutral because they do not move stress. • Level I Phonological Rules: Quite a number of phonological rules apply at level I but not at level 2. For instance, the so-called trio syllabic will lax a vowel before a level I suffix (e.g.. divine -- divin+ity) but not before a level 2 suffix (e.g., dcvine#ly and devine#hess). 
Similarly, the role that maps /t/ into /sd in president ~ pre~dency also fails to apply before a level 2 affix: president#hood (not *presidence#hood). Given evidence such as this, there can be little doubt on the necessity of the level ordering distinction. Level 2 affixes are fairly easy to implement; the parser simply strips off the stress neutral affixes, assigns stress to the parts and then pastes the results back together. For instance, paremhood is parsed into parent and #hood. The pieces are assigned 10 and 3 stress respectively, producing 103 stress when the pieces are recombined. In general, the parsing of level 2 affixes is not very. difficult, though there are some cases where it is very difficult to distinguish between a level I and !evel 2 affix. For example, -able is level 2 in changeable (because of silent • which is not found before level I suffixes), but level I in cbmparable (bocause of the strees shift from compare which is not found before level 2 suffixes). For dealing with a limited number of affixes like .able and -merit, there are a number of special purpose diagnnstic procedures which decide the appropriate level. Level I suffixes have to be strer,,sed differently. In the lexicon, each level I suffix is marked with a weight. Thus, for example, the su~ +~'ty is marked RR. These weights are assigned to the last two syllables, regularless of what would normally be computed. Thus, the word civii+ity is assigned the pseudo-weight ---RR which is then assigned the correct stress by the usual methods: (stress-from-weights "'--RR" 1) -- ('0100" "3100") The fact that +ity is marked for weight in this way makes it relatively easy for the program to determine the location of the primary stress. Shown below are some sample results of the program's ability to assign primary stress.* % Correct Number of Level 1 Primary Stress Words Tested Suffix 0.98 726 +ity 0.98 1652 +ion 0.97 345 +ium 0.97 136 +ular 0.97 339 +icai 0.97 236 +cons 0.97 33 +ization 0.98 160 +aceeus 0.97 215 +ions 0.96 151 +osis 0.96 26 i 7 +ic 0.96 364 +ial 0.96 169 +meter 0.95 6 i 7 +inn 0.95 122 +ify 0.94 17 +bly 0.94 17 +logist 0.94 313 +ish 0.93 56 +istic 0.92 2626 +on 0.92 24 +ionary 0.90 19 +icize 0.88 52 +ency 0.82 1818 +al 0.77 128 +atory 0.77 529 +able These selected results are biased slightly in favor of the program. Over all, the program correctly assigns primary stress to 82% of the words in the dictionary, and 85% for words ending with a level I affix. Prefixes are more difficult than suffixes. Examples such as super +fluou~ (levell 1), s;,per#conducwr (level 2), and sr, per##market (level 3) illustrate just how difficult it is to assign the prefix to the correct level. Even with the correct parse, it not a simple matter to assign stress. In general, level 2 pretixes are stressed like compounds, assigning primary stress to the left morpheme (e.g., ¢,ndercarriage) for nouns and to the right for verbs (e.g., undergb) and adjectives (e.g., ;,ltracons~rvative), though there seem to be two classes of excentions. First. in technical terms, under certain conditions • Stria M ~ as izatma, acl~lur, lo~rt are really seqm:aces o( se,,erat at~xes. In order tO avoid some difficult psrun| ~ I da:ided not to allow more than one level I sm~a par ward. This limitinuGa requires that [ enter ~u~ of Icv©l I sut~x~ into the I m 251 [Hayes. pp. 307-309]. primary stress can back up onto the prefix: (e.g., telegraphy). 
Secondly, certain level 1 suffixes such as +ity seem to induce a remarkable stress shift (e.g., sfiper#conductor and si~per#conductDity), in violation of level ordering as far as I can see. For level 1 suffutes, the program assumes the prefixes are marked light and that they are extrametricai in verbs, but not in nouns. Prefix extrametrieality accounts for the well-known alternation p~rmit (noun) versus permlt (verb). Both have L- weight (recall the prefix is L)o but the noun has initial struts since the final syllable is extrametrical ~hereas the verb has final stress since the initial syllable is extrametrical. Extrametricality is required here, __hec:_use otherwise both the noun and verb would receive initial stress. tt. Ety=aetn The stress rules outlined above work very well for the bulk of the language, but they do have difficulties with certain loan words. For instance, consider the Italian word tort6nL By the reasoning outlined above, tortbni ought to stress like c;,lcuii since both words have the same part of speech and the same syllable weights, but obviously, it doesn't. In tact. almost all Italian loan words have penultimate stress, as illustrated by the Italian surnames: Aldrigh~ttL Angel~tti. Beli&ti. /ann~cci. Ita[ihno. Lombardlno. Marci~no. Marcbni. Morillo. Oliv~ttL It is clear from examples such as these that the stress of Italian loans is not dependent upon the weight of the penultimate syllable, unlike the stress of native English words. Japanese loan words are perhaps even more striking in this respect. They too have a very strong tendency toward penultimate stress when (mis)pronounced by English speakers: Asah&a. Enom•o. Fujimhki. Fujim&o. Fujim;,ru. Funasl, ka, Toybta. Um~da. One might expect that a loan word would be stressed using either the rules of the the language that it was borrowed from or the rules of the language that it was borrowed into. But neither the rules of Japanese nor the rules of English can account for the penultimate stress in Japanese loans. I believe that speakers of English adopt what i like m call a pseudo- foreign accent. That is. when speakers want to communciate that a word is non-native, they modify certain parameters of the English stress rules in simple ways that produce bizarre "foreign sounding" outputs. Thus, if an English speaker wants to indicate that a word is Japanese, he might adopt a pseudo-Japanese accent that marks all syllables heavy regnardless of their shape. Thus, Fujimfira, on this account, would be assigned penultimate stress because it is noun and the penultimate syllable is heavy. Of course there are numerous alternative pseudo-Japanese accents that also produce the observed penultimate stress. The current version of the program assumes that Japanese loans have light syllables and no extrametricality. At the present time, I have no arguments for deciding between these two alternative pseudo-Japanese accents. The pseudo-accent approach presupposes that there is a method for distinguishing native from non-native words, and for identifying the etymological distinctions required for selecting the appropriate pseudo-accent. Ideally, this decision would make use of a number of phonotactic and morphological cues, such as the fact that Japanese has extremely restricted inventory of syllables and that Germanic makes heavy use of morphemes such as .berg, wein. and .stein. 
Unfortunately, because I haven't had the time to develop the right model, the relevant etymological distinctions are currently decided by a statistical tri-gram model. Using a number of training sets (gathered from the telephone book, computer readable dictionaries, bibliographies, and so forth), one for each etymological distinction, I estimated a probability P(xyz|e) that each three letter sequence xyz is associated with etymology e. Then, when the program sees a new word w, a straightforward Bayesian argument is applied in order to estimate for each etymology a probability P(e|w) based on the three letter sequences in w. I have only just begun to collect training sets, but already the results appear promising. Probability estimates are shown in the figure below for some common names whose etymology most readers probably know. The current set of etymologies are: Old French (OF), Old English (OE), International Scientific Vocabulary (ISV), Middle French (MF), Middle English (ME), Latin (L), Gaelic (NBrit), French (Fr), Core (Core), Swedish (Swed), Russian (Rus), Japanese (Jap), Germanic (Ger), and Southern Romance (SRom).

Etymology from Orthography
Acosta         0.96 SRom
Alvarado       0.92 SRom    0.08
Alvarez        1.00 SRom
Andersen       0.95 Swed
Beauchamp      0.47 MF      0.45
Bornstein      1.00 Ger
Calhoun        1.00 NBrit
Callahan       1.00 NBrit
Camacho        0.89 SRom
Camero         0.77 SRom    0.18
Campbell       1.00 NBrit
Castello       1.00 SRom
Castillo       1.00 SRom
Castro         0.73 SRom    0.17
Cavanaugh      1.00 NBrit
Chamberlain    0.86 OF      0.13
Chambers       0.37 Core    0.31
Champion       0.73 OF      0.20
Chandler       0.41 OF      0.25
Chavez         1.00 SRom
Christensen    0.74 Swed    0.15
Christian      0.63 Core    0.25
Christiansen   0.81 Swed    0.10
Churchill      0.62 OE      0.17
Faust          0.40 Ger     0.38
Feliciano      1.00 SRom
Fernandez      1.00 SRom
Ferrara        0.79 SRom    0.17
Ferrell        0.73 SRom    0.08
Flaherty       1.00 NBrit
Flanagan       0.97 NBrit
Fuchs          1.00 Ger
Gallagher      0.67 NBrit   0.33
Gallo          1.00 SRom
Galloway       0.65 OF      0.19
Garcia         0.95 SRom
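A naive-Bayes tri-gram classifier of the kind described takes only a few lines. The sketch below is my own reduction with toy training sets and add-one smoothing; Church's model was estimated from far larger name lists:

from collections import Counter
from math import log

TRAIN = {'Jap':  ['fujimoto', 'fukuda', 'tanaka', 'yamamoto'],
         'SRom': ['garcia', 'fernandez', 'castillo', 'alvarez']}

def trigrams(word):
    w = f'##{word.lower()}#'           # pad so edges form trigrams too
    return [w[i:i + 3] for i in range(len(w) - 2)]

model = {e: Counter(t for name in names for t in trigrams(name))
         for e, names in TRAIN.items()}
totals = {e: sum(c.values()) for e, c in model.items()}
VOCAB = {t for c in model.values() for t in c}

def log_p(etym, word):
    # log P(w|e) under a smoothed trigram model; the Bayesian step P(e|w)
    # just compares these scores (uniform prior assumed here).
    c, n = model[etym], totals[etym]
    return sum(log((c[t] + 1) / (n + len(VOCAB) + 1)) for t in trigrams(word))

def classify(word):
    return max(model, key=lambda e: log_p(e, word))

print(classify('Fukushima'))   # 'Jap'
print(classify('Gonzalez'))    # 'SRom'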
for example, the training sets included the I00OO must frequent names, then mint of the names the program would be asked about would probably be in one the training sets (assuming that the results reported above for the telephone directories also apply here). Before concluding. I would like to point out that etymology is not just used for stress assignment. Note. for instance, that orthographic ch and gh are hard in Italian loans Macchi and spaghetti, in constrast to the general pattern where ch is /ch/ and /ghJ is silent. In general. velar softening seems to be cooditionalized by etymology. Thus, for er, ample" /g/ is usually soft before /I/ (as in ginger) but not in girl and Gibson and many other Germanic words. Similarly. other phonological rules (especially vowel shift) seem to be conditionalized by etymology. [ hope to include these topics in a longer version of this paper to be written this summer. 12. Cmc~l~t Remarks Stress assignment was formulated in terms of Waltz' constraint propagation paradigm, where syllable weight played the role of Waltz' • labels and Sproat's weight table played the role of Waltz' vertex constraints. It was argued that this formalism provided a clean computational framework for dealing with the following four linguistic issues: • Syllable Weight:. oh@ /deviffop * Part of Speech:. t~rment (n) / torment (v) • M e ~ . degrhde /dbgradhtion • Etymo/o~: c/'lculi I tortbni Currently. the program correctly assigns primary streets to 82% of the words in the diotionary. Refm Chomsky. N.. and Halle, M., The Sound Pattern of English. Harper and Row, 1968. Hayes. B. P., A Metrical Theory of Stress Rules, unpublished Ph.D. thesis, MIT. Cambridge. MA., 1980. Liberman, L., and Prince, A.. On Stress and Linguistic Rhythm, Linguistic inquiry 8, pp. 249-336, 1977. Mohanan. K., lacxical Phonology, MIT Doctoral Dissertation. available for the Indiana University Linguistics Club. 1982. Waltz. D., Understanding Line Drawings of Scences with Shadows. in P. Winston (ed.) The Psychology of Computer Vision, McGraw-Hill. NY, 1975. 253
AN ECLECTIC APPROACH TO BUILDING NATURAL LANGUAGE INTERFACES

Brian Phillips, Michael J. Freiling, James H. Alexander, Steven L. Messick, Steve Rehfuss, Sheldon Nicholl†
Tektronix, Inc.
P.O. Box 500, M/S 50-662
Beaverton, OR 97077

ABSTRACT

INKA is a natural language interface to facilitate knowledge acquisition during expert system development for electronic instrument trouble-shooting. The expert system design methodology develops a domain definition, called GLIB, in the form of a semantic grammar. This grammar format enables GLIB to be used with the INGLISH interface, which constrains users to create statements within a subset of English. Incremental parsing in INGLISH allows immediate remedial information to be generated if a user deviates from the sublanguage. Sentences are translated into production rules using the methodology of lexical-functional grammar. The system is written in Smalltalk and, in INKA, produces rules for a Prolog inference engine.

INTRODUCTION

The ideal natural language interface would let any user, without any prior training, interact with a computer. Such an interface would be useful in the knowledge acquisition phase of expert system development, where the diagnostic knowledge of a skilled practitioner has to be elicited. As technicians are not familiar with formal knowledge representation schemes, a trained intermediary, a knowledge engineer, is generally employed to handcraft the internal format. This process is time-consuming and expensive.

INKA (INglish Knowledge Acquisition) permits task experts to express their knowledge in a subset of English, and have it automatically translated into the appropriate representational formalism. In particular, the version of INKA to be discussed here accepts input in a sublanguage called GLIB which permits the statement of facts and rules relevant to the troubleshooting of electronic systems (Freiling et al., 1984), and translates these statements into Prolog unit clauses for later processing by a specialized inference mechanism. Experiments with INKA to date have enabled us to construct sufficient troubleshooting rules to build a localizing troubleshooter for a simple circuit.

INKA is designed as one of the tools of DETEKTR, an environment for building knowledge based electronic instrument troubleshooters (Freiling & Alexander, 1984). DETEKTR supports an expert system development methodology which is outlined below. The design goal of INKA is that it serve as a natural language input system to facilitate the transfer of knowledge during the knowledge acquisition phase of expert system development.

† A summer intern at Tektronix, currently at the University of Illinois, Champaign-Urbana.
This paper discusses the natural language technology used in building INKA. The system incorporates a diverse collec- tion of natural language technologies in its construction. Specifically, INKA utilizes a semam/c grammar (Burton, 1976) to characterize the domain sublanguage, lexical-functional sem~aics (Kaplan & Bresnan, 1982) to translate to the internal form of representation, and an interface that includes left- corner parsitlg with in-line guidance to address the Linguistic coverage problem that aris~ with sublanguages. We feel this eclectic approach is a useful for building application-oriented natural language interfaces. Although we are describing a knowledge acquisition application, the methodology can be used for any application whose sublanguage can be stated in the prescribed grammar formalism. Tereisias (Davis, 1977) provides a natural language environment for debugging a knowledge base. INKA at present contains no facilities to modify an existing rule or to test the evolving knowledge base for some level of integr/ty; these are to be future additions. INKA is written in Smalltalk (Goidberg & Robson, 1983) and runs on both the Tekuroulx Magnolia Workstation and the 4404 Artificial Intelligence System. INKA makes extensive use of the bit-mapped display and three-button mouse on these systems. LANGUAGE AS A KNOWLEDGE ENGINEERING TOOL The major bottlenecks in building knowledge based sys- tems have proven to be related to the definition and acquisi- tion of knowledge to be processed. The first bottleneck occurs in the knowledge definition phase of system development, where symbolic structures are defined that represent the knowledge necessary to accomplish a particular task. A bottleneck arises because of the ~ortage of knowledge engineers, who are skilled in defining these struc- tures and using them to express relevant knowledge. 254 The second bottleneck occurs in the knowledge acquisition phase, which involves the codification of the knowledge neces- sary for a system to function correctly. A bottleneck arises here because in current practice, the presence of the knowledge engineer is required throughout this time- consuming process. In the course of defining a viable methodology for the construction of expert systems (Frelling & Alexander 1984; Alexander et al. 1985), we have identified cermia classes of problems where the task of definin$ the knowledge structures and the task of actually building them can be effectively separated, with only the former being performed by a trained knowledge engineer. The problem of building a large collec- tion of knowledge-based troubleshooters for electronic instru- meats is an example. In order to support the construct/on of a large class of such systems, it makes sense to perform the knowledge definition step for the overall domain initially, and to build domain-specific developmera tools, which include problem-oriented mbsets of Enghsh and special purpose graph- ical displays, that can be reused in the development of each individual knowledge-based system. Even in the context of such an approach, we have found that there is usually a shortage of capable knowledge engineers to carry out the knowledge deflnltioa phase, and that a well- defined methodology can be of great value here in aiding non- linguistically oriented computer scientists to carry out this ver- bal elicitation task. The major issue is how to gee started defining the forms into which knowledge is to be cast. 
We have found it an effect/ve technique tO begin this pro- cem by recording statements made by task experts on tape, and transcribing these to fairly natural En~)i~. When enough recording has been done, the statements begin to take on recognizable patterns. It is then pom/ble to build a formal grammar for much of the relevant utterances, using linguistic engineering techniques such as semantic grammars The sym- bols of this grammar and the task specific vocabulary provide convenient points for defining formal sub-structures, which are pieced together to define a complete symbolic representation. Once the grammar is reasonably well-defined, the mapping to symbolic representation can be carried out with mapping tenh- niques such as the f-structure constraints of lexical-fuactioaal grammar. Up to this point, we can imagine that the entire task has been carried out on paper, or some machine-readable equivalent. Even in such a rudimentary form, the exercise is useful, because it provides a conveniently formal documenta- tion for the knowledge representation decisions that have been made. However, it is also the case that these formal defini- tions, if appropriately constructed, provide all that is necessary to construct a problem specific interface for acquiring utter- antes expressed in this sublanguage. In fact, the idea of using this technique to build acquisition interfaces, using INGLISH, actually occurred as a result of wondering what to do with a grammar we had constructed simply in order to document our representation structures (Freiling et al. 1984). We do not intend to imply that it is possible in complex knowledge based system applications to simply build a gram. mar and immediately begin acquirin~ knowledge. Often the process leading to construction of the grammar can be quite complex. In our case, it even involved building a simple proto- type troubleshooting system before we had gained sufficient confidence in our representation structures to attempt a knowledge acquis/tion interface. Nor do we intend to claim that all the knowledge neces- sary to build a complete expert system need be computed in this fashion. Systems such as INKA can be justified on an economic bash if they make pom/ble only the transfer of a ~'~ nificam fraction of the relevant knowledge. GLIB - A PROBLEM SPECHrIC SUBLANGUAGE The knowledge acquisition language developed for elec- tron/c devine troubleshooting is called GLIB (General Language for Insumneat Behavior), and is aimed primarily at describing observations of the static and dynamic behavior of electrical signals as measured with oscilloscopes, voltmeters, and other standard electronic test instruments (Freiling et al. 1984). The grammatical structure of GLIB is that of a seman- tic grammar, where non-terminal symbols represent units of interest to the problem domain rather than recognizable linguistic categories. This semantic grammar formalism is an important part of the DETEKTR methodology because the construction of semantic grammars is a technique that is easily learned by the apprentice knowledge engineer. It also ma~es possible the establishment of very strong constraints on the formal language developed by this process. Two of the design constraints we find it advisable to impose are that the language be unambigu- ous (in the formal sense of a unique derivation for each legal sentence) and that it be context-free. 
These constraints, as will be seen, make pom/ble features of the interface which cannot normally be delivered in other contexts, such as menus from which to select all legal next terminal tokens. While increasing complexity of the acquisition sublanguage may make these goals unfeas/ble past a certain point, in simple systems they are features to be cherished. Figure I shows a fragment of the GLIB grammar. In the DETEKTR version of INKA, sentences in this language are accepted, and mapped into Proiog terms for proceming by a Prolog based diagnostic inference engine. At present, the eric/- ration is unguided: responsibility res/des with the user to ensure that all relevant statements are generated. We are still studying the issues involved ia determining completeness of a knowledge base and assimilating new knowledge. One out- come of these studies should be means of guiding the user to areas of the knowledge base that are incomplete and warrant further elaboration. Future enhancements to the system will include explanation and modification facilities, so that knowledge may be added or changed after testing the infer- ence engine. THE NATURAL LANGUAGE INTERFACE DESIGN INGLISH - INterface enGLISH (Ph/Ilips & Nicholl, 1984) - allows a user to create sentences either by menu selec- tion, by typing, or by a mixture of the two. This allows the self-paced transition from menu-driven to a typed mode of interact/on. In-line help is available. To assist the v/pist, automatic spelling correction, word completion, and automatic phrase completion are provided. INGLISH constrains users to create statements within a subset of English, here GLIB. A statement can be entered as a sequence of menu- selections using only the mouse. A mouse-click brings up a menu of words and phrases that are valid extensions of the 255 <:rttl*'~> ::I= IF <condition> THEN <¢on¢lmma> <condifiou> ::', <¢otltl=n independeln predicate> I <context independent predicate> WHEN ~'-.m~-tund coatext> <conclusion> ::!, <fuectionaJ context> <atonfi¢ funct~nal context> ::- <device> HAS FAILED I <device> B OK < f ~ conner> ::1. <atomic functional context> ! <atomic functional context> AND <functional context> I <atomic functio~taJ context> OR <f,,r~tionaI context> <atOtUiC stt~tetugaJ contexL> ::~, <device> IS REMOVED ~-JtfttCtttt~l COtlteXt> ::1= <atomi¢ structm'aJ context> I <atomic structural context> AND <structural context> <context independent prostate> ::= <value predicate> <value predicatc> ::= <value expre~on> IS <value expreslion> I <value expt~mou> <comparator> <value c~im:smon> <coml~tralOf> ::~ IS EQUAL TO I = I IS GREATER THAN I > I IS LESS THAN I < ! IS LESS THAN OR EQUAL TO I <= I IS GREATER THAN OR EQUAL TO I >- I IS NOT EQUAL TO I !:, Figure 1: A fragment of GLIB current sentence fragment. Once a selection is made from the menu using the mouse, the fragment is extended. This sequence can be repeated until the sentence is completed. Creating a sentence in this manner compares with the NLMENU system (Tennant etal., 1983). Unlike NLMENU, keyboard entry is also possible with IHGLISH. Gilfoil (1982) found that users prefer a command form of entry to menu- driven dialogue as their experience increases. When typing, a user who is unsure of the coverage can invoke a menu, either by a mouse-click or by typing a second space character, to find out what INGLISH expects next without aborting the current statement. 
Similarly, any unacceptable word causes the menu to appear, giving immediate feedback of a deviation and suggestions for correct continuation. A choice from the menu can be typed or selected using the mouse. INGLISH in fact allows all actions to be performed from the keyboard or with the mouse, and for them to be freely intermingled. As only valid words are accepted, all completed sentences are well-formed and can be translated into the internal representation. Figure 5, in the "INGLISH" window, shows a complete sentence and its translation, and a partial sentence with a menu of continuations. The numbers associated with each menu item provide a shorthand for entry, i.e., "#12" can be typed instead of "RESISTANCE". As menu entries can be phrases, this can save significant typing effort.

Input is processed on a word-by-word basis. Single spaces and punctuation characters serve as word terminators. Words are echoed as typed and overwritten in uppercase when accepted. Thus, if lowercase is used for typing, the progress of the sentence is easily followed. An invalid entry remains visible along with the menu of acceptable continuations, then is replaced when a selection is made.

The spelling corrector (a Smalltalk system routine is used) only corrects to words that would be acceptable in the current syntactic/semantic context. As Carbonell and Hayes (1983) point out, this is more efficient and accurate than attempting to correct against the whole application dictionary.

Word completion is provided with the "escape" character (cf. DEC, 1971). When this is used, INGLISH attempts to complete the word on the basis of the characters so far typed. If there are several possibilities, they are displayed in a menu.

Automatic phrase completion occurs whenever the context permits no choice. The completion will extend as far as possible. In an extreme case a single word could yield a whole sentence! The system will "soak up" any words in the completion that have also been typed.

The spelling corrector and automatic phrase completion can interact in a disturbing manner. Any word that is outside the coverage will be treated as an error and an attempt will be made to correct it. If there is a viable correction, it will be made. Should phrase completion then be possible, a portion of a sentence could be constructed that is quite different from the one intended by the user. Such behavior will probably be less evident in large grammars. Nevertheless, it may be necessary to have a "cautious" and a "trusting" mode, as in Interlisp's DWIM (Xerox, 1983), for users who resent the precocious impatience of the interface.

The system does not support anaphora, and ellipsis is offered indirectly. The interface has two modes: "ENTRY" and "EDIT" (Figure 5). These are selected by clicking the mouse while in the pane at the top right of the interface window. Rules are normally entered in the Entry mode. When in Edit mode, the window gives access to the Smalltalk editor. This allows any text in the window to be modified to create a new statement. After editing, a menu command is used to pass the sentence to the parser as if it were being typed. Any error in the constructed sentence causes a remedial menu to be displayed and the tail of the edited sentence to be thrown away.
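The typing aids just described can be summarized in a few lines of code. The Python sketch below is hypothetical (INGLISH uses a Smalltalk system routine for correction); it assumes a next_tokens function like the one sketched earlier, corrects a typed word only against the currently legal continuations, and auto-completes whenever the context permits exactly one continuation.

    import difflib

    def correct(word, expected):
        """Correct `word` against the legal next tokens only (cf. Carbonell
        & Hayes 1983): smaller and safer than the whole dictionary."""
        hits = difflib.get_close_matches(word.upper(), sorted(expected), n=1)
        return hits[0] if hits else None

    def extend(tokens, word, next_tokens):
        """Accept, correct, or reject one typed word, then auto-complete
        any forced continuations (automatic phrase completion)."""
        expected = next_tokens(tokens)
        w = word.upper()
        if w not in expected:
            w = correct(w, expected)
            if w is None:
                return tokens, expected        # reject: show the menu
        tokens = tokens + [w]
        while True:                            # extend as far as possible
            expected = next_tokens(tokens)
            if len(expected) != 1:
                break
            tokens = tokens + [expected.pop()]
        return tokens, expected

The disturbing interaction noted above falls out directly: if correct() silently repairs a word and the repaired prefix then has a single continuation, the loop will happily build a sentence the user never intended.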
The INGLISH interface alleviates the problem of linguistic coverage for designers and users of natural language interfaces. A natural language interface user composes his entries bearing in mind a model of the interface's capabilities. If his model is not accurate, his interactions will be error-prone. He may exceed the coverage of the system and have his entry rejected. If this happens frequently, use of the interface may be abandoned in frustration. On the other hand, he may form an overly conservative model of the system and fail to utilize the full capabilities of the interface (Tennant, 1980). An interface designer is confronted by many linguistic phenomena, e.g., noun groups, relative clauses, ambiguity, reference, ellipsis, anaphora, and paraphrases. On account of performance requirements or a lack of theoretical understanding, many of these constructions will not be in the interface. INGLISH allows designers to rest more comfortably with the compromises they have made, knowing that users can systematically discover the coverage of the interface.

THE IMPLEMENTATION OF INGLISH

INGLISH parses incrementally from left to right and performs all checking on each word as it is entered. The parser follows the Left-Corner Algorithm (Griffiths & Petrick, 1965), modified to a pseudo-parallel format so that it can follow all parses simultaneously (Phillips, 1984). This algorithm builds phrases bottom-up from the left corner, i.e., rules are selected by the first symbol of their right-hand sides. For example, given a phrase-initial category c, a rule of the form X -> c ... will be chosen. The remaining rule segments of the right-hand side are predictions about the structure of the remainder of the phrase and are processed left-to-right. Subsequent inputs will directly match successive rule segments if the latter are terminal symbols of the grammar. When a non-terminal symbol is encountered, a subparse is initiated. The subparse is also constructed bottom-up from the left corner, following the rule selection process just described. When an embedded rule is completed, the phrase formed may have the structure of the non-terminal category that originated the subparse and so complete the subparse. If there is no match, it will become the left corner of a phrase that will eventually match the originating category.

The parser includes a Reachability Matrix (Griffiths & Petrick, 1965) to provide top-down filtering of rule selection. The matrix indicates when a category A can have a category B as a left-most descendant in a parse tree. The matrix is static and can be derived from the grammar in advance of any parsing. It is computable as the transitive closure under multiplication of the boolean matrix of left daughters of non-terminal categories in the grammar. It is used as a further constraint on rule selection. For example, when the goal is to construct a sentence and the category of the first word of input is c, then rule selection, giving X -> c ..., will also be constrained to have the property S =>* X ... The filtering is applicable whenever a rule is selected: during subparses the constraint is to reach the category originating the subparse.

A semantic grammar formalism is used in INGLISH, which makes the grammar application dependent. As was mentioned earlier, this format was independently chosen as part of the knowledge engineering methodology for describing the application domain. The rationale for the choice for INGLISH was that the simultaneous syntactic and semantic checking assists in achieving real-time processing. A fragment of the grammar is shown in Figure 1.
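The reachability matrix is a one-time boolean closure over the grammar. A minimal sketch, assuming the dictionary-of-productions grammar format used in the earlier example (the original is Smalltalk, not Python):

    def reachability(grammar):
        """reach[A][B] is true iff B can be a left-most descendant of A:
        the transitive closure of the 'left daughter' relation."""
        cats = list(grammar)
        reach = {a: {b: False for b in cats} for a in cats}
        for a, prods in grammar.items():
            for rhs in prods:
                if rhs and rhs[0] in grammar:    # left daughter is a non-terminal
                    reach[a][rhs[0]] = True
        for k in cats:                           # Warshall's transitive closure
            for a in cats:
                for b in cats:
                    if reach[a][k] and reach[k][b]:
                        reach[a][b] = True
        return reach

    # Filtering: a rule X -> c ... is admitted while working toward goal G
    # only if G == X or reach[G][X].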
Pre-processing on the grammar constructs the terminal and non-terminal vocabularies of the grammar, the reachability matrix, and an inverse dictionary. The set of all possible initial words and phrases for sentences can also be precomputed.

The Smalltalk system contains controllers that manage activity on a variety of input devices, and from these a controller was readily constructed* to coordinate mouse and keyboard activity in INGLISH. Either form of entry increments an intermediate buffer which is inspected by the parser. When a complete word is found in the buffer it is parsed.

    * Smalltalk is an object-oriented language. Instead of creating a
    procedure that controls system operation, the user creates an object
    (usually a data structure) and a set of methods (operations that
    transform, and communicate with, the object). Smalltalk programs
    create objects or send messages to other objects. Once received,
    messages result in the execution of a method. Programmers do not
    create each object and its methods individually. Instead, classes of
    objects are defined. A class definition describes an object and the
    methods that it understands. Classes are structured hierarchically,
    and any class automatically inherits methods from its superclass. As
    a result of this hierarchy and code inheritance, applications may be
    written by adapting previously constructed code to the task at hand.
    Much of the application code can be inherited from previously defined
    Smalltalk code. The programmer need only redefine differences by
    overriding the inappropriate code with customized code (Alexander &
    Freiling, 1985).

Every phrase in an on-going analysis is contained in a Smalltalk object. The final parse is a tree of objects. The intermediate state of a parse is represented by a set of objects containing partially instantiated phrases. After the first word has established an initial set of phrase objects, they are polled by the parser for their next segments. From these and the inverse dictionary, a "lookahead dictionary" is established that associates expected words with the phrasal objects that would accept them. Using this dictionary an incoming word will only be sent to those objects that will accept it. If the word is not in the set of expected words, the dictionary keys are used to attempt spelling correction and, if correction fails, to make the menu to be displayed. If the dictionary contains only a single word, this indicates that automatic phrase completion should take place. A new lookahead dictionary is then formed from the updated phrase objects, and so on.
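In outline, the lookahead dictionary is just an index from expected words to the phrase objects awaiting them. A hypothetical Python rendering of one cycle of this loop (the real system is Smalltalk; Phrase and its methods are invented for illustration):

    class Phrase:
        """One partially instantiated phrase: a rule plus a dot position."""
        def __init__(self, lhs, rhs, dot=0):
            self.lhs, self.rhs, self.dot = lhs, rhs, dot

        def next_segment(self):
            return self.rhs[self.dot] if self.dot < len(self.rhs) else None

        def accept(self, word):
            return Phrase(self.lhs, self.rhs, self.dot + 1)

    def lookahead_dictionary(phrases, expand):
        """Map each expected word to the phrase objects that would accept
        it.  `expand` maps a non-terminal segment to the words that can
        begin it (inverse dictionary plus reachability, precomputed)."""
        table = {}
        for p in phrases:
            seg = p.next_segment()
            if seg is None:
                continue
            for word in expand(seg):
                table.setdefault(word, []).append(p)
        return table

    def step(phrases, word, expand):
        """Send an incoming word only to the objects that will accept it."""
        table = lookahead_dictionary(phrases, expand)
        if word not in table:
            return None, sorted(table)   # trigger correction, then the menu
        return [p.accept(word) for p in table[word]], None

    phrases = [Phrase("CONCL", ("DEVICE", "HAS", "FAILED"))]
    expand = lambda seg: ["LED-1", "LED-2"] if seg == "DEVICE" else [seg]
    print(sorted(lookahead_dictionary(phrases, expand)))  # ['LED-1', 'LED-2']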
KNOWLEDGE TRANSLATION

The internal form of a diagnostic rule is a clause in Prolog. Sentences are translated using functional schemata, as in lexical-functional grammar. The functional schemata are attached to the phrase structure rules of GLIB (Figure 2).

    (↑ FORM) = <rule((↑ COND FORM), (↑ CNCL FORM))>
    (↑ COND) = ↓      (↑ CNCL) = ↓
    <rule> -> IF <condition> THEN <conclusion>

    (↑ FORM) = <state((↑ IND FORM), (↑ STATE FORM))>
    (↑ IND) = ↓       (↑ STATE) = ↓
    <condition> -> <indicator> IS <state>

    (↑ FORM) = <status((↑ DEV), failed)>
    (↑ DEV) = ↓
    <conclusion> -> <device> HAS FAILED

    Figure 2: GLIB rules with attached schemata

Unlike lexical-functional grammar, the schemata do not set up constraint equations, as the interface and the semantic grammar ensure the well-formedness and unambiguity of the sentence. As a result, propagation of functional structure is handled very quickly in a post-processing step, since the applicable grammatical rules have already been chosen by the parsing process. Further, by restricting the input to the strictly prescribed sublanguage GLIB, not English in general, the translation process is simplified.

The parser constructs a parse tree with attached schemata, referred to as a constituent structure, or c-structure. Translation proceeds by instantiating the meta-variables of the schemata of the c-structure created by INGLISH to form functional equations, which are solved to produce a functional structure (f-structure). The final rule form is obtained from the f-structure of the sentence when its sub-structures are recursively transformed according to the contents of each f-structure. As an example, given the lexical-functional form of the semantic grammar in Figure 2 and the following sentence:

    IF LED-2 IS ON THEN TRANSISTOR-17 HAS FAILED

the c-structure in Figure 3 would be produced. This shows that a rule has a condition part, COND, and a conclusion part, CNCL, that should become a clausal form "rule(COND, CNCL)." The meta-symbol ↑ refers to the parent node and ↓ to the node to which the schema is attached.

    [Figure 3: C-structure -- the parse tree of the example sentence, with
    the schemata of Figure 2 attached to its nodes; the original diagram
    is not reproducible here.]

The functional specifications of the example may be solved by instantiating the meta-symbols with actual nodes and assigning properties and values to the nodes according to the specifications. In the example given, most specifications are of the form "(↑ property) = value", where "value" is most often ↓. This form indicates that the node graphically indicated by ↓ in the c-structure is the specified property of the parent node (pointed to by ↑). Specifications are left-associative and have a functional semantic interpretation: a specification of (↑ COND FORM) refers to the FORM property of the parent node's COND property. The f-structure for the example is given in Figure 4.

    COND [ IND    led-2
           STATE  on
           FORM   <state((↑ IND FORM), (↑ STATE FORM))> ]
    CNCL [ DEV    transistor-17
           FORM   <status((↑ DEV), failed)> ]
    FORM <rule((↑ COND), (↑ CNCL))>

    Figure 4: F-structure

The final phase of INKA interprets the f-structures to produce Prolog clauses. All of the information required to produce the clauses is contained in the FORM property in this example. The FORM property is printed, with all variables instantiated, to produce the final rule in the form of a Prolog clause. The f-structure of Figure 4 produces the Prolog clause

    rule(state(led-2, on), status(transistor-17, failed)).
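The post-processing step is essentially a recursive substitution over FORM properties. A toy Python version follows; the encoding of f-structures as nested dictionaries, and of FORM templates as (functor, argument-path) pairs, is an illustrative assumption, not INKA's internal format.

    def resolve(fstruct, path):
        """Follow a path such as ('COND',) through nested f-structures."""
        node = fstruct
        for attr in path:
            node = node[attr]
        return node

    def print_form(fstruct, form):
        """Instantiate a FORM template; each hole names a path in fstruct."""
        functor, args = form
        rendered = []
        for arg in args:
            value = resolve(fstruct, arg) if isinstance(arg, tuple) else arg
            if isinstance(value, dict):                   # embedded f-structure:
                value = print_form(value, value["FORM"])  # recurse on its FORM
            rendered.append(str(value))
        return "%s(%s)" % (functor, ", ".join(rendered))

    # The f-structure of Figure 4, hand-encoded:
    F = {
        "COND": {"IND": "led-2", "STATE": "on",
                 "FORM": ("state", [("IND",), ("STATE",)])},
        "CNCL": {"DEV": "transistor-17",
                 "FORM": ("status", [("DEV",), "failed"])},
        "FORM": ("rule", [("COND",), ("CNCL",)]),
    }

    print(print_form(F, F["FORM"]))
    # -> rule(state(led-2, on), status(transistor-17, failed))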
KNOWLEDGE USE

Translated rules are sent to a diagnostic engine that has been implemented in Prolog. The diagnostic engine uses GLIB statements about the hierarchical structure of the device to build a strategy for successive localization of failures. Starting at the highest level ("the circuit" in GLIB terminology), named sub-circuits are examined in turn, and diagnostic rules are retrieved to determine correctness or failure of the sub-circuit in question. If no specific determination can be made, the sub-circuit is assumed to be functioning properly.

A sample session including acquisition of a rule and running of a test diagnosis is shown in Figure 5. The circuit used in this example consists of an oscillator which drives a light-emitting diode (LED-2 in the schematic) and a power supply (LED-1 indicates when the power supply is on). The schematic diagram of the circuit is in the upper pane of the "Instrument Data" window; the circuit board layout is in the lower pane.

Rules for diagnosing problems in the circuit ("troubleshooting" rules) are added to the system in the window labeled "INGLISH." The interface to the diagnostic engine is in the "Prolog" window. The "INGLISH" window shows a recently added rule, with its Prolog translation immediately below it. It also shows a partially completed rule along with a menu of acceptable sentence continuations. The user may select one of the menu items (either a word or phrase) to be appended to the current sentence. The "Prolog" window displays the results of a recent test diagnosis. This test was run after the first rule in the "INGLISH" window was added, but before the addition of the second rule was begun. The last question asked during the diagnosis corresponds to the first rule. Resistor 2, in both the schematic and board diagrams of the "Instrument Data" window, is highlighted as a result of running the diagnosis: whenever the diagnostic engine selects a specific component for consideration, that component is highlighted on the display.

    [Figure 5: An INKA session (screen image; only its text survives).
    The "INGLISH" window shows the newly entered rule
        IF NODE 4 VOLTAGE IS EQUAL TO NODE 5 VOLTAGE
        THEN RESISTOR 2 HAS FAILED.
    with its translation
        rule(comp(eq, voltage(node(4)), voltage(node(5))),
             status(component(resistor(2)), failed), _)
    and a partial rule "IF POWER SUPPLY 1 ..." with a numbered menu of
    continuations: CURRENT (#3), FREQUENCY (#4), HAS FAILED (#5),
    IMPEDANCE (#6), IS (#7), POWER (#11), RESISTANCE (#12),
    VOLTAGE (#13), **ABORT (#14).  The "Prolog" window shows the test
    diagnosis:
        Is led number 2 not flashing? yes
        What is the voltage of node number 2? 15
        Is led number 1 dim? no
        Is it true that the voltage of node number 4 is equal to the
        voltage of node number 5? yes
        Oscillator number 1 is failing.  Resistor number 2 is failing.
    The "Instrument Data" window shows the circuit schematic (upper pane)
    and the circuit board layout (lower pane).]

Some 20 statements and rules have been collected for diagnosing the circuit; Figure 6 lists a portion of them with their Prolog translations.

    THE CIRCUIT CONTAINS OSCILLATOR-1 AND POWERSUPPLY-1.
    has_component(block(circuit), block(oscillator(1))).
    has_component(block(circuit), block(powersupply(1))).

    RESISTOR-1 IS PART OF OSCILLATOR-1.
    has_component(block(oscillator(1)), component(resistor(1))).

    IF LED-2 IS NOT FLASHING AND THE VOLTAGE OF NODE-2 IS EQUAL TO
    15 VOLTS THEN OSCILLATOR-1 HAS FAILED.
    rule(and(not(state(led(2), flashing)), comp(voltage(node(2)), 15)),
         status(block(oscillator(1)), failed), []).

    IF LED-1 IS DIM AND LED-2 IS OFF THEN RESISTOR-1 HAS FAILED.
    rule(and(state(led(1), dim), state(led(2), off)),
         status(component(resistor(1)), failed), []).

    Figure 6: GLIB rules with Prolog translations
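To make the use of such translated rules concrete, here is a hypothetical miniature of the diagnostic loop in Python rather than Prolog. It evaluates each rule's condition by asking the user yes/no questions and reports any part whose rule fires; the question phrasing is invented, and the real engine's hierarchical localization strategy and answer caching are omitted.

    RULES = [  # (condition, failing part), transcribed from Figure 6
        (("and", ("not", ("state", "led-2", "flashing")),
                 ("eq", ("voltage", "node-2"), 15)), "oscillator-1"),
        (("and", ("state", "led-1", "dim"),
                 ("state", "led-2", "off")), "resistor-1"),
    ]

    def ask(prompt):
        return input(prompt + " (yes/no) ").strip().lower().startswith("y")

    def holds(cond):
        op = cond[0]
        if op == "and":
            return all(holds(c) for c in cond[1:])
        if op == "not":
            return not holds(cond[1])
        if op == "state":
            return ask("Is %s %s?" % (cond[1], cond[2]))
        if op == "eq":
            return ask("Is %s of %s equal to %s?" %
                       (cond[1][0], cond[1][1], cond[2]))
        raise ValueError("unknown condition " + op)

    def diagnose():
        for cond, part in RULES:
            if holds(cond):
                print(part, "has failed.")

    diagnose()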
DISCUSSION

Informal observations show that subjects generally need only a few minutes of instruction to start using INGLISH. Initially there is a preference to use the mouse to explore the coverage and then to begin to incorporate some typing. We have not had any long-term users to observe their trends.

Users could react negatively to limited language systems; even when the coverage is well engineered, users will occasionally encounter the boundaries. Fortunately, Hendler & Michaelis (1983) found that subjects were able to adapt to limited language systems.

INGLISH does not let the designer off the hook! A user can still have a statement in mind and not easily find a way to express it through the grammar. Diligent engineering is still needed to prepare a grammar that will allow a user to be guided to a paraphrase of his original thought. Nevertheless, the grammar design problem is simplified: when guidance is provided, fewer paraphrases need be incorporated.

The use of a semantic grammar to define the fragment of English to be processed does impose limitations on the complexity of acceptable input. In the INKA system as it is currently constructed, however, there are two distinct ways in which the semantic correctness of an input can be enforced: first in the parsing of the semantically constrained grammar, and second in the translation process, as the functional structures are built. In short, our approach to building practical natural language interfaces does not depend on a semantic grammar to constrain input. In the future we intend to explore the use of a wider class of grammars that include a domain-independent kernel and a domain-specific component, like GLIB. In this approach we are in substantial agreement with Winograd (1984), who advocates a similar approach as an effective direction for further natural language research.

REFERENCES

Alexander, J.H., & Freiling, M.J. Building an Expert System in Smalltalk-80 (R). Systems and Software, 1985, 4, 111-118.

Alexander, J.H., Freiling, M.J., Messick, S.L., & Rehfuss, S. Efficient Expert System Development Through Domain-Specific Tools. Proceedings of the Fifth International Workshop on Expert Systems and their Applications, Avignon, France.

Burton, R.R. Semantic Grammar: An Engineering Technique for Constructing Natural Language Understanding Systems (Technical Report No. 3453). Cambridge, MA: Bolt, Beranek, & Newman Inc., 1976.

Carbonell, J.G., & Hayes, P.J. Recovery Strategies for Parsing Extragrammatical Language. American Journal of Computational Linguistics, 1983, 3-4, 123-146.

Davis, R. Interactive Transfer of Expertise: Acquisition of New Inference Rules. Proceedings of the Fifth International Joint Conference on Artificial Intelligence, Cambridge, MA, 1977, 321-328.

[DEC] TOPS-20 Reference Manual. Maynard, MA: Digital Equipment Corporation, 1971.

Freiling, M.J., & Alexander, J.H. Diagrams and Grammar: Tools for the Mass Production of Expert Systems. IEEE First Conference on Artificial Intelligence Applications, Denver, Colorado, 1984.

Freiling, M., Alexander, J., Feucht, D., & Stubbs, D. GLIB - A Language for Representing the Behavior of Electronic Devices (Technical Report CR-84-12). Beaverton, OR: Tektronix, Inc., 1984.

Gilfoil, D.M. Warming up to Computers: A Study of Cognitive and Affective Interaction over Time. Proceedings of the Human Factors in Computer Systems Conference, Gaithersburg, MD, 1982, 245-250.

Goldberg, A., & Robson, D. Smalltalk-80: The Language and its Implementation. Reading, MA: Addison-Wesley, 1983.

Griffiths, T., & Petrick, S.R. On the Relative Efficiency of Context-Free Grammar Recognizers. Comm. ACM, 1965, 8, 289-300.

Hendler, J.A., & Michaelis, P.R. The Effects of Limited Grammar on Interactive Natural Language.
Proceedings of the Human Factors in Computer Systems Conference, Boston, MA, 1983, 190-192.

Kaplan, R.M., & Bresnan, J.W. Lexical-Functional Grammar: A Formal System for Grammatical Representation. In J. Bresnan (Ed.), The Mental Representation of Grammatical Relations. Cambridge, MA: MIT Press, 1982.

McKeown, K.R. Natural Language for Expert Systems: Comparisons with Database Systems. Proceedings of the International Conference on Computational Linguistics, Stanford, CA, 1984, 190-193.

Phillips, B. An Object-Oriented Parser. In B.G. Bara & G. Guida (Eds.), Computational Models of Natural Language Processing. Amsterdam: North-Holland, 1984.

Phillips, B., & Nicholl, S. INGLISH: A Natural Language Interface (Technical Report CR-84-27). Beaverton, OR: Tektronix, Inc., 1984.

Tennant, H.R. Evaluation of Natural Language Processors (Technical Report T-103). Coordinated Science Laboratory, University of Illinois, Urbana, IL, 1980.

Tennant, H.R., Ross, K.M., & Thompson, C.W. Usable Natural Language Interfaces Through Menu-Based Natural Language Understanding. Proceedings of the Human Factors in Computer Systems Conference, Boston, MA, 1983, 190-192.

Winograd, T. Moving the Semantic Fulcrum (Technical Report 84-17). Center for the Study of Language and Information, Stanford, CA, 1984.

[Xerox] Interlisp Reference Manual. Palo Alto, CA: Xerox Palo Alto Research Center, 1983.
Structure-Sharing in Lexical Representation

Daniel Flickinger, Carl Pollard, and Thomas Wasow
Hewlett-Packard Laboratories
1501 Page Mill Road
Palo Alto, CA 94303, USA

Abstract

The lexicon now plays a central role in our implementation of a Head-driven Phrase Structure Grammar (HPSG), given the massive relocation into the lexicon of linguistic information that was carried by the phrase structure rules in the old GPSG system. HPSG's grammar contains fewer than twenty (very general) rules; its predecessor required over 350 to achieve roughly the same coverage. This simplification of the grammar is made possible by an enrichment of the structure and content of lexical entries, using both inheritance mechanisms and lexical rules to represent the linguistic information in a general and efficient form. We will argue that our mechanisms for structure-sharing not only provide the ability to express important linguistic generalizations about the lexicon, but also make possible an efficient, readily modifiable implementation that we find quite adequate for continuing development of a large natural language system.

1. Introduction

The project we refer to as HPSG is the current phase of an ongoing effort at Hewlett-Packard Laboratories to develop an English language understanding system which implements current work in theoretical linguistics. Incorporating innovations in the areas of lexicon, grammar, parser, and semantics, HPSG is the successor to the GPSG system reported on at the 1982 ACL meeting.[1] Like the GPSG system, the current implementation is based on the linguistic theory known as Generalized Phrase Structure Grammar,[2] though incorporating insights from Carl Pollard's recent work on Head Grammars,[3] which lead us to employ a richer lexicon and a significantly smaller grammar. We report here on the structure of our lexicon, the mechanisms used in its representation, and the resulting sharp decrease in the number of phrase structure rules needed.[4]

[1] Gawron, et al. (1982).
[2] Gazdar, Klein, Pullum, and Sag (1985).
[3] Pollard (1984).

2. Mechanisms employed

We employ three types of mechanisms for structure-sharing in our representation of the lexicon for the HPSG system: inheritance, lexical rules, and an operation to create nouns from ordinary database entities. In order to present a detailed description of these mechanisms, we offer a brief sketch of the representation language in which the lexicon is constructed. This language is a descendant of FRL and is currently under development at HP Labs.[5] Those readers familiar with frame-based knowledge representation will not need the review provided in the next section.

2.1. The representation language

The basic data structures of the representation language are frames with slots, superficially analogous to Pascal records with fields. However, frames are linked together by means of class and inheritance links, such that when a particular frame F0 is an instance or subclass of a more general frame F1, information stored in F1 can be considered part of the description of F0. For example, our lexicon database contains frames specifying properties of classes of words, such as the VERB class, which numbers among its subclasses BASE and FINITE. Having specified on the VERB class frame that all verbs have the value V for the MAJOR feature, this value does not have to be stipulated again on each of the subclasses, since the information will be inherited by each subclass.
This class linkage is transitive, so information can be inherited through any number of intermediate frames. Thus any instance of the FINITE class will inherit the value FINITE for the feature FORM directly from the FINITE class frame, and will also inherit the value V for the MAJOR feature indirectly from the VERB class frame.

[4] Significant contributions to the basic design of this lexicon were made by Jean Mark Gawron and Elizabeth Ann Paulson, members of the Natural Language Project when the work on HPSG was begun in 1983. We are also indebted to Geoffrey K. Pullum, a consultant on the project, for valuable assistance in the writing of this paper.
[5] For a description of this language, see Rosenberg (1983).

Of course, to make good use of this information, one must be able to exercise some degree of control over the methods of access to the information stored in a hierarchical structure of this sort, to allow for sub-regularities and exceptions, among other things. The language provides two distinct modes of inheritance; the one we will call the normal mode, the second the complete mode. When using the normal mode to collect inheritable information, one starts with the frame in question and runs up the inheritance links in the hierarchy, stopping when the first actual value for the relevant slot is found. The complete mode of inheritance simply involves collecting all available values for the relevant slot, beginning with the particular frame and going all the way up to the top of the hierarchy. We illustrated the complete mode above in describing the feature values that a finite verb like works would inherit. To illustrate a use of the normal mode, we note that the VERB class will specify that the CASE of the ordinary verb's subject is OBJECTIVE, as in Mary wanted him to work (not *Mary wanted he to work). But the subjects of finite verbs have nominative case, so in the FINITE class frame we stipulate (CASE NOMINATIVE) for the subject. If we used the complete mode of inheritance in determining the case for a finite verb's subject, we would have a contradiction, but by using the normal mode, we find the more local (NOMINATIVE) value for CASE first, and stop. In short, when normal mode inheritance is employed, locally declared values override values inherited from higher up the hierarchy.

The third and final property of the representation language that is crucial to our characterization of the lexicon is the ability of a frame to inherit information along more than one inheritance path. For example, the lexical frame for the finite verb works is not only an instance of FINITE (a subclass of VERB), but also an instance of INTRANSITIVE, from which it inherits the information that it requires a subject and nothing else in order to make a complete sentence. This ability to establish multiple inheritance links for a frame proves to be a powerful tool, as we will illustrate further below.
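A compact way to see the two retrieval modes is as two traversals of the same class graph. The following Python sketch is our own illustration (the actual system is an FRL descendant, not Python), with simplified slot names; it models frames with multiple parents and both normal-mode and complete-mode lookups.

    class Frame:
        def __init__(self, name, parents=(), **slots):
            self.name, self.parents, self.slots = name, list(parents), slots

        def get_normal(self, slot):
            """Normal mode: the first value found walking up the hierarchy;
            local declarations override inherited ones."""
            if slot in self.slots:
                return self.slots[slot]
            for p in self.parents:
                value = p.get_normal(slot)
                if value is not None:
                    return value
            return None

        def get_complete(self, slot):
            """Complete mode: collect every value up to the top."""
            values = list(self.slots.get(slot, []))
            for p in self.parents:
                values += p.get_complete(slot)
            return values

    VERB = Frame("VERB", FEATURES=[("MAJOR", "V")], SUBJ_CASE="OBJECTIVE")
    FINITE = Frame("FINITE", [VERB], FEATURES=[("FORM", "FINITE")],
                   SUBJ_CASE="NOMINATIVE")
    INTRANSITIVE = Frame("INTRANSITIVE", OBLIGATORY=["subject"])
    WORKS = Frame("WORKS", [FINITE, INTRANSITIVE])

    print(WORKS.get_complete("FEATURES"))
    # [('FORM', 'FINITE'), ('MAJOR', 'V')]  -- collected all the way up
    print(WORKS.get_normal("SUBJ_CASE"))
    # 'NOMINATIVE' -- FINITE's local value overrides VERB's OBJECTIVE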
2.2. Inheritance

Having presented some of the tools for inheritance, let us now see how and why this mechanism proves useful for representing the information about the lexicon that is needed to parse sentences of English.[6] We make use of the frame-based representation language to impose a rich hierarchical structure on our lexicon, distributing throughout this structure the information needed to describe the particular lexical items, so that each distinct property which holds for a given class of words need only be stated once. We do this by defining generic lexical frames for grammatical categories at several levels of abstraction, beginning at the top with a generic WORD frame, then dividing and subdividing into ever more specific categories until we hit bottom in frames for actual English words. An example will help clarify the way in which we use this first basic mechanism for sharing structure in representing lexical information.

[6] The use of inheritance for efficiently representing information about the lexicon is by no means an innovation of ours; see Bobrow and Webber (1980a,b) for a description of an implementation making central use of inheritance. However, we believe that the powerful tools for inheritance (particularly that of multiple inheritance) provided by the representation language we use have allowed us to give an unusually precise, easily modifiable characterization of the generic lexicon, one which greatly facilitates our continuing efforts to reduce the number of necessary phrase structure rules.

We employ, among others, generic (class) frames for VERB, TRANSITIVE, and AUXILIARY, each containing just that information which is the default for its instances. The AUXILIARY frame stores the fact that in general auxiliary verbs have as their complement a verb phrase in base form (e.g., the base VP be a manager in will be a manager). One of the exceptions to this generalization is the auxiliary verb have, as in have been consultants, where the complement VP must be a past participle rather than in base form. The exception is handled by specifying the past participle in the COMPLEMENT slot for the HAVE frame, then being sure to use the normal mode of inheritance when asking for the syntactic form of a verb's complement.

To illustrate the use we make of the complete mode of inheritance, we first note that we follow most current syntactic theories in assuming that a syntactic category is composed (in part) of a set of syntactic features, each specified for one or more out of a range of permitted values. So the category to which the auxiliary verb has belongs can be specified (in part) by the following set of feature-value pairs:

    [(MAJOR V) (TENSE PRES) (AGREEMENT 3RD-SING) (CONTROL SSR) (AUX PLUS)]

Now if we have included among our generic frames one for the category of present-tense verbs, and an instance of this class for third-person-singular present-tense verbs, then we can distribute the structure given in the list above in the following way. We specify that the generic VERB frame includes in its features (MAJOR V), that the PRESENT-TENSE frame includes (TENSE PRES), that the THIRD-SING frame includes (AGREEMENT 3RD-SING), that the SUBJECT-RAISE frame includes (CONTROL SSR), and the AUXILIARY frame includes (AUX PLUS). Then we can avoid saying anything explicitly about features in the frame for the auxiliary verb has; we need only make sure it is an instance of the three rather unrelated frames THIRD-SING, SUBJECT-RAISE, and AUXILIARY.
In our current implementation, we have made the lexical rules directional, in each case defining one class as input to the rule, and a related but distinct class as output. By providing with each lexical rule a generic class frame which specifies the gener~ form and predictable properties of the rule's output, we avoid unnecessary work when the lexical rule applies. The particular output frame will thus get its specifications from two sources: idiosyncratic information copied or computed from the particular input frame, and pre- dictable information available via the class/inheritance links. As usual, we depend on an example to make the notions clear; consider the lexical rule which takes ac- tive, transitive verb frames as input, and produces the corresponding passive verb frames. A prose description of this passive lexical rule follows: Passive Lexicai Rule If F0 is a trm~sitive verb frame with spelling XXX, then F1 is the corresponding passive frame, where (I) FI is an instance of the generic PASSIVE class frame (2) FI has as its spelling whatever the past particip|e's spelling is for F0 (XXXED if regular, stipulated if irregular) (3) F1 has as its subject's role the role of F0's object, and assigns the role of F0's subject to F1's optional PP-BY. (4) F1 has OBJECT deleted from its obligatory list. (5) F1 has as its semantics the semantics of FO. It is in the TRANSITIVE frame that we declare the applicability of the passive [exical rule, which po- tentially can apply to each instance (unless explicitly blocked in some frame lower in the lexicon hierarchy, for some particular verb like rc-~emble). By triggering particular lexical rules from selected generic frames, we avoid unnecessary ~ttempts to apply irrelevant rules each time ~ new lexical item is created. The TRANSI- TIVE frarne, then, has roughly the following structure: v See, e.g., Stanley (1967), Jackendoff (1975), Bresnan (1982). (TRANSITIVE (CLASSES (subcyclic)) (OBLIGATORY (object) (subject)) (FEATURES (control trans)) (LEX-RULES (passive-rule)) ) The generic frame of which every output from the passive rule is an instance looks as follows: (PASSIVE (CLASSES (verb)) (FEATURES (predicative plus) (form pas)) (OPTIONAL (pp-by)) ) An example, then, of a verb frame which serves as input to the passive rule is the frame for the transitive verb make, whose entry in the lexicon is given below. Keep in mind that a great deal of inherited information is part of the description for make, but does not need to be mentio,ted in the entry for make below; put dif- ferently, the relative lack of grammatical information appearing in the make entry below is a consequence of our maintaining the strong position that only infor- mation which is idiosyncratic should be included in a lexical entry. (MAKE (CLASSES (main) (base) (transitive)) (SPELLING (make)) (SUBJECT (role (ma~e.er))) (OBJECT (role (make.ed))) (LEX-RULES (past-participle (irreg-spelh ~made")) (past (irreg-spelh ~made"))) ) Upon application of the passive lexlcal rule ~o ~Le make frame, the corresponding passive frame MADE- PASSIVE is produced, looking like this: (MADE-PASSIVE (CLASSES (main)(passive)(transitive)) (SPELLING (made)) (SUBJECT (role (make.ed))) (PP-BY (role (make.er))) ) Note that the MADE-PASSIVE frame is still a main verb and still transitive, but is not connected by any inheritance link to the active make fro, me; the pas- sive frame is not an instance of the active frame. 
This absence of any actual inheritance link between input and output frames is generally true for all lexical rules, 264 not surprisingly once the inheritance link is understood. As a result, all idiosyncratic information must (loosely speaking) be copied from the input to the output frame, or it will be lost. Implicit in this last remark is the as- sumption that properties of a lexical item which are idiosyncratic should only be stated once by the creator of the lexicon, and then propagated as appropriate by lexical rules operating on the basic frame which was entered by hand. All of our lexical rules, including both inflectional rules, such as the lexical rule which makes plural nouns from singular nouns, and derivational rules, such as the nominalization rule which produces nouns from verbs, share the following properties: each rule specifies the class of frames which are permissible inputs; each rule specifies a generic frame of which every one of the rule's outputs is an instance; each rule copies idiosyncratic information from the input frame while avoiding copy- ing information which can still be inherited; each rule takes as input a single-word lexical frame and produces a single-word lexical frame (no phrases in either case); each rule permits the input frame to stipulate an ir- regular spelling for the corresponding output frame, blocking the regular spelling; and each rule produces an output which cannot be input to the same rule. Most of these properties we believe to be well-motivated, though it may be that, for example, a proper treat- ment of idioms will cause us to weaken the single-word input and output restriction, or we may find a lexical rule which can apply to its own output. The wealth of work in theoretical linguistics on properties of lexi- cal rules should greatly facilitate the fine-tuning of our implementation we extend our coverage. One weakness of the current implementation of lex- ical rules is our failure to represent the [exical rules themselves as frames, thus preventing us from tak- ing advantage of inheritance and other representational tools that we use to good purpose both for the lexical rules and for the phrase structure rules, about which we'll say more below. A final remark about lexical rules involves the role of some of our lexical rules as replacements for metarules in the standard GPSG framework. Those familiar with recent developments in that framework are aware that metarules axe now viewed as necessarily constrained to ouerate only on lexically-headed phrase structure rules, s but once that move has been made, it is then not such a drastic move to attempt the elimio nation of metarules altogether in favor of ]exical rules. ° This is the very road we are on. We maintain that the elimination of metarules is not only a aice move theo- retically, bat also advantageous for implementation. s See Fiickinger (1983) for an initial motivation for such a restriction on metarules. 9 See Pollard (1985) for a more detailed discussion of this important point. 2.4. Nouns from database entities The third mechanism we use for structure-sharing allows us to leave out of the lexicon altogether the vast majority of cow.molt and proper nouns that refer to en- titles in the target database, including in the lexicon only those nouns which have some idiosyncratic prop- erty, such as nouns with irregular plural forms, or mass nouns. 
This mechanism is simply a procedure much like a lexical rule, but which takes as input the name of some actual database frame, and produces a lexi- cal frame whose spelling slot now contains the name of the database frame, and whose semantics corresponds to the database frame. Such a frame is ordinarily cre- ated when parsing a given sentence in which the word naming the database frame appears, and is then dis- carded once the query is processed. Of course, in or- der for this strategy to work, the database frame must somehow be linked to the word that refers to it, ei- ther by having the frame name be the same as the word, or by having constructed a list of pairings of each database frame with the English spelling for words that refer to that frame. Unlike the other two mechanisms (inheritance and lexical redundancy rules), this pair- ing of database frames with [exical entries tends to be application-specific, since the front end of the system must depend on a particular convention for naming or marking database frames. Yet the underlying intuition is a reasonable one, namely that when the parser meets up with a word it doesn't recognize, it attempts to treat it as the name of something, either a proper noun or a common noun, essentially leaving it up to the database to know whether the name actually refers to anything. As an example, imagine that the frame for Pullum (the consultant, not the prouer noun) is present in the target database, and that we wish to process a query which refers by name to Pullum (such as Does Puilurn have a modernS). [t would not be necessary to have constructed a proper-name frame for Pullum before- hand, given that the database frame is named Pullum. Instead, the mechanism just introduced would note, in analyzing the query, that Pullum was the name of a frame in the target database; it would consequently create the necessary proper-name frame usable by the parser, possibly discarding it later if space were at a premium. Where an application permits this elimina- tion of most common and proper nouns from the lexi- con, one gains not 0nly considerable space savings, but a sharp reduction in the seed for additions to the lexi- con by salve users as tile target database grows. 2.5. On-the-fly fra~nes All three of the mechanisms for structure-sharing that we have discussed here have in common the ad- ditional important property that they can be applied without modification either before ever analyzing a query, or on the fly when trying to handle a partic- ular query. This property is important for us largely 265 because in developing the system we need to be able to make alterations in the structure of the lexicon, so the ability to apply these mechanisms on the fly means that changes to the lexicon have an immediate and pow- erful effect on the behavior of the system. As men- tioned earlier, another significant factor has to do with time/space trade-offs, weighing the cost in memory of storing redundantly specified lexical entries against the cost in time of having to reconstruct these derived lex- ical entries afresh each time. Depending on the par- titular development task, one of the two options for deriving lexical items is preferabte over the other, b~tt both options need to be available. 3. The grammar As we advertised above, the wholesale moving of grammatical information from the phrase structure rules to the lexicon has led to a dramatic reduction in the number of these rules. 
The rules that remain are usually quite general in nature, and make crucial use of the notion head of a constituent, where the head of a noun phrase is a noun, the head of a verb phrase (and of a sentence~ is a verb. and so on. In each case. it is the head that carries most of the information about what syntactic and semantic properties its sister(s) in the constituent must have. l° For example, the single rule which follows is sufficient to construct the phrase structure for the sentence The consultant works. Grammar-Rule-t X -> Ct II[CONTROL INTRANS] The rule is first used to construct the noun phrase the eor~ultant, taking consultant as the head, and using the information on the lexical frame CONSULTANT- COMMON (which is inherited from the COMMON- NOUN class) that it requires a determiner as its only complement in order to form a complete noun phrase. Then the rule is used again with the lexical frame WORK-THIRD-SING taken as the head, ~.nd using the information that it requi~es a nominative singular noun phrase (which was just constructed) as its only obliga- tory complement in order to make a complete sentence. Another example will also allow ,as to illustrate how information once thought to be clearly the responsi- bility of phrase structure rules is in fact more simply represented as lexical information, once one has the power of a highly structured lexicon with inheritance available. A second rule in the grammar is provided to admit an optional constituent after an intransitive head, such as wor~ on Tuesdays or modem [or Pullura: [ntransitive-Adj unct-Rule X -> H[CONTROL INTRANS] ADJUNCT 10 See Pollard (1984) for a thorough discussion of head grammars. This works quite well for prepositional phrases, but is by no means restricted to them. Eventually we no- ticed that another standard rule of English grammar could be eliminated given the existence of this rule; namely, the rule which admits relative clauses as in man who work~ for the sentence Smith hired the man who works: Relative-Clause-Rule X -> II[MAJOR N] S[REL] It should soon be clear that if we add a single piece of information to the generic COMMON-NOUN class frame, we can eliminate this rule. All that is necessary is to specify that a permissible adjunct for common nouns is a relative clause (leaving aside the semantics, which is quite tractable). By stating this fact on the COMMON-NOUN frame, every lexical common noun will be ready to accept a relative clause in just the right place using the Intransitive-Adjunct-Rule. In fact, it seems we can use the same strategy to eliminate any other specialized phrase structure rules for admitting post-nominal modifiers (such as so-called reduced rel- ative clauses as in The people working for Smith are coasultants). This example suggests one direction of research we axe pursuing: to reduce the number of rules in the grammar to an absolute minimum. At present it still seems to be the case that some small number of phrase structure rules will always be necessary; for example, we seem to be unable to escape a PS rule which ad- mits plural nouns as full noun phrases without a de- terminer, as in Consultants work (but not *Consultant work). Relevant issues we will leave unaddressed here involve the role of the PS rules in specifying linear or- der of constituents, whether the linking rules of GPSG (which we still employ) could ever be pushed into the lexicon, and whether in fact both order and linking rules ought to be pushed instead into the parser. 4. 
4. Conclusion

Having sketched the mechanisms employed in reducing redundant specification in the lexicon for the HPSG system, and having indicated the brevity of the grammar which results from our rich lexicon, we now summarize the advantages we see in representing the lexicon as we do, apart from the obvious advantage of a much smaller grammar. These advantages have to do in large part with the rigors of developing a large natural language system, but correspond at several points to concerns in theoretical linguistics as well.

First are a set of advantages that derive from being able to make a single substitution or addition which will effect a desired change throughout the system. This ability obviously eases the task of development based on experimentation, since one can quickly try several minor variations of, say, feature combinations and accurately judge the result on the overall system. Of equal importance to development is the consistency provided, given that one can make a modification to, say, the features for plural nouns, and be sure that all regular nouns will reflect the change consistently. Third, we can handle many additions to the lexicon by users without requiring expertise of the user in getting all the particular details of a lexical entry right, for an important (though far from complete) range of cases. Note that this ability to handle innovations seems to have a close parallel in people's ability to predict regular inflected forms for a word never before encountered.

A second advantage that comes largely for free given the inheritance mechanisms we employ involves the phenomenon referred to as blocking,[11] where the existence of an irregular form of a word precludes the application of a lexical rule which would otherwise produce the corresponding regular form. By allowing individual lexical entries to turn off the relevant lexical rules based on the presence in the frame of an irregular form, we avoid producing, say, the regular past tense form *maked, since as we saw, the entry for make warns explicitly of an irregular spelling for the past tense form.

[11] See Aronoff (1976) for discussion.

Already mentioned above was a third advantage of using the mechanisms we do, namely that we can use inheritance to help us specify quite precisely the domain of a particular lexical rule, rather than having to try every lexical rule on every new frame only to discover that in most cases the rule fails to apply.

Finally, we derive an intriguing benefit from having the ability to create on-the-fly noun frames for any database entry, and from our decision to store our lexical items using the same representation language that is used for the target database: we are able without additional effort to answer queries about the make-up of the natural language system itself. That is, we can get an accurate answer to a question like How many verbs are there? in exactly the way that we answer the question How many managers are there?. This ability of our system to reflect upon its own structure may prove to be much more than a curiosity as the system continues to grow; it may well become an essential tool for the continued development of the system itself. The potential for usefulness of this reflective property is enhanced by the fact that we now also represent our grammar and several other data structures for the system in this same frame representation language, and may progress to representing in frames other intermediate stages of the processing of a sentence.
This enhanced ability to extend the lexical coverage of our system frees us to invest more effort in meeting the many other challenges of developing a practical, extensible implementation of a natural language system embedded in a serious linguistic theory.

REFERENCES

Aronoff, M. (1976) Word Formation in Generative Grammar. Linguistic Inquiry Monograph 1. Cambridge, Mass.: The MIT Press.

Bobrow, R. and B. Webber (1980a) "Psi-Klone," in Proceedings of the 1980 CSCSI/SCEIO AI Conference, Victoria, B.C.

Bobrow, R. and B. Webber (1980b) "Knowledge Representation for Syntactic/Semantic Processing," in Proceedings of the First Annual National Conference on Artificial Intelligence, Stanford University.

Bresnan, J., ed. (1982) The Mental Representation of Grammatical Relations. Cambridge, Mass.: The MIT Press.

Flickinger, Daniel P. (1983) "Lexical Heads and Phrasal Gaps," in Flickinger, et al., Proceedings of the West Coast Conference on Formal Linguistics, vol. 2. Stanford University Linguistics Dept.

Gawron, J. et al. (1982) "Processing English with a Generalized Phrase Structure Grammar," ACL Proceedings 20.

Gazdar, G., E. Klein, G. K. Pullum, and I. A. Sag (1985) Generalized Phrase Structure Grammar. Cambridge, Mass.: Harvard University Press.

Jackendoff, R. (1975) "Morphological and Semantic Regularities in the Lexicon," Language 51, no. 3.

Pollard, C. (1984) Generalized Phrase Structure Grammars, Head Grammars, and Natural Language. Doctoral dissertation, Stanford University.

Pollard, C. (1985) "Phrase Structure Grammar Without Metarules," presented at the 1985 meeting of the West Coast Conference on Formal Linguistics, Los Angeles.

Rosenberg, S. (1983) "HPRL: A Language for Building Expert Systems," IJCAI 83: 215-217.

Stanley, R. (1967) "Redundancy Rules in Phonology," Language 43, no. 1.
A TOOL KIT FOR LEXICON BUILDING

Thomas E. Ahlswede
Computer Science Department
Illinois Institute of Technology
Chicago, Illinois 60616, USA

ABSTRACT

This paper describes a set of interactive routines that can be used to create, maintain, and update a computer lexicon. The routines are available to the user as a set of commands resembling a simple operating system. The lexicon produced by this system is based on lexical-semantic relations, but is compatible with a variety of other models of lexicon structure. The lexicon builder is suitable for the generation of moderate-sized vocabularies and has been used to construct a lexicon for a small medical expert system. A future version of the lexicon builder will create a much larger lexicon by parsing definitions from machine-readable dictionaries.

INTRODUCTION

Natural language processing systems need much larger lexicons than those available today. Furthermore, a good computer lexicon with semantic as well as syntactic information is elaborate and hard to construct. We have created a program which enables its user to interactively build and extend a lexicon. The program sets up a user environment similar to a simple interactive operating system; in this environment lexical entries can be produced through a small set of commands, combined with prompts specified by the user for the desired kind of lexicon.

The interactive lexicon builder is being used to help construct entries for a lexicon to be used to parse and generate stroke case reports. Many terms in this medical sublanguage either do not appear in standard dictionaries or are used in the sublanguage with special meanings. The design of the lexicon builder is intended to be general enough to make it useful for others building lexicons for large natural language processing systems involving different sublanguages.

The interactive lexicon builder will be the basis for a fully automatic lexicon builder which uses Sager's Linguistic String Parser (LSP) to parse machine-readable text into a relational network based on a modified version of Werner's MTQ (Modification-Taxonomy-Queueing) schema. Initially this program will be applied to Webster's Seventh Collegiate Dictionary and the Longman Dictionary of Contemporary English, both of which are available in machine-readable form.

LEXICAL-SEMANTIC RELATIONS

The semantic component of the lexicon produced by this system consists principally of a network of lexical-semantic relations. That is, the meaning of a word in the lexicon is indicated as far as possible by its relationships with other words. These relations often have semantic content themselves and thus contribute to the definition of the words they link. The two most familiar such relations are synonymy and antonymy, but others are interesting and important. For instance, to take an example from the vocabulary of stroke reports, the carotid is a kind of artery and an artery is a kind of blood vessel. This "is a kind of" relation is taxonomy. We express the taxonomic relations of "carotid", "artery", and "blood vessel" with the relational arcs

    carotid T artery
    artery T blood vessel

Another important relation is that of the part to the whole:

    ventricle PART heart
    Broca's area PART brain

Note that taxonomy is transitive: if the carotid is an artery and an artery is a blood vessel, then the carotid is a blood vessel. The presence or absence of the properties of transitivity, reflexivity, and symmetry are important in using relations to make inferences.
The part-whole relation is more complicated than taxonomy in its properties; some instances of it are transitive and others are not. From this and other criteria, Iris et al. (forthcoming) distinguish four different part-whole relations.

Taxonomy and part-whole are very common relations, by no means restricted to any particular sublanguage. Sublanguages may, however, use relations that are rare or nonexistent in the general language. In the stroke vocabulary, there are many words for pathological conditions involving the failure of some physical or mental function. We have invented a relation NNABLE to express the connection between the condition and the function:

    aphasia NNABLE speech
    amnesia NNABLE memory

Relations such as T, PART, and NNABLE are especially useful in making inferences. For instance, if we have another relation FUNC, describing the typical function of a body part, we might combine the relational arc

    speech FUNC Broca's area

with the arc

    aphasia NNABLE speech

to infer that when aphasia is present, the diagnostician should check for the possibility of damage to Broca's area (as well as to any other body part which has speech as a function); a small executable sketch of this inference appears at the end of this section.

[Figure 1: Part of a relational network -- the original diagram is not reproduced here.]

Another kind of relation is the "collocational relation", which governs the combining of words. These are particularly useful for generating idiomatic text. Consider the "typical preposition" relation PREP:

    on PREP list

which says that an item may be "on a list" as opposed to "in a list" or "at a list."

Although the lexicon builder is based on a relational model, it can be adapted for use in connection with a variety of models of lexicon structure. A semantic-field approach can be handled by the same mechanism as relations; the lexicon builder also recognizes unary attributes of words, and these attributes can be treated as semantic features if one wishes to build a feature-based lexicon.

APPLICATIONS FOR THE LEXICON BUILDER

This project was motivated partly by theoretical questions of lexicon design and partly by projects which required the use of a lexicon. For instance, the Michael Reese Hospital Stroke Registry includes a text generation module powered by a relational lexicon (Evens et al., 1984). This application provided a framework of goals within which the interactive lexicon builder was developed. The vocabulary required for the Stroke Registry text generator is of moderate size, about 2000 words and phrases. This is small enough that a lexicon for it can be built interactively.

One can imagine many applications for a large lexicon such as the automatic lexicon builder will construct. Question answering is one of our original areas of interest; a large, densely connected vocabulary will greatly add to the variety of inferences a question answering system can make. Another area is information retrieval, where experiments (Evens et al., forthcoming) have shown that the use of a relational thesaurus leads to improvements in both recall and precision.

On a more theoretical level, the automatic lexicon builder will add greatly to our understanding of sublanguages, notably that of the dictionary itself. We have noted that a specialized relation such as NNABLE, unusual in the general language, may be important in a sublanguage. We believe that such specific relations are part of the distinctive character of every sublanguage. The very possibility of creating a large, general-language lexicon points toward a time when sublanguages will be obsolete for many of the purposes for which they are now used; but they will still be useful and interesting for a long time to come, and the automatic lexicon builder gives us a new tool for analyzing them.
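The inference pattern promised above, composing NNABLE with FUNC and exploiting the transitivity of T, can be rendered in a few lines of Python. This is our own illustration of the idea, not code from the lexicon builder, and the little triple store is hypothetical.

    ARCS = [  # (word1, relation, word2)
        ("carotid", "T", "artery"),
        ("artery", "T", "blood vessel"),
        ("aphasia", "NNABLE", "speech"),
        ("speech", "FUNC", "Broca's area"),
    ]

    def targets(word, rel):
        return [w2 for w1, r, w2 in ARCS if w1 == word and r == rel]

    def sites_to_check(condition):
        """If condition NNABLE f, and f FUNC part, suggest checking part."""
        return [part for f in targets(condition, "NNABLE")
                     for part in targets(f, "FUNC")]

    def isa(word, ancestor):
        """Taxonomy is transitive, so follow T arcs upward."""
        for parent in targets(word, "T"):
            if parent == ancestor or isa(parent, ancestor):
                return True
        return False

    print(sites_to_check("aphasia"))       # ["Broca's area"]
    print(isa("carotid", "blood vessel"))  # True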
The very possibility of creating a large, general-language lexicon points toward a time when sublanguages will be obsolete for many of the purposes for which they are now used; but they will still be useful and interesting for a long time to come, and the automatic lexicon builder gives us a new tool for analyzing them.

THE INTERACTIVE LEXICON BUILDER

Commands

The interactive lexicon builder consists of an operating-system-like environment in which the user may invoke the following commands:

HELP displays a set of one-line summaries of the commands, or a paragraph-length description of a specified command. This paragraph describes the command-line arguments, optional or required, for the given command, and briefly explains the function of the command.

ADDENTRY provides a series of prompts to enable the user to create a lexical entry. Some of these prompts are hard coded; others can be set up in advance by the user so that the lexicon can be tailored to the user's needs.

EDIT enables the user to modify an existing entry. It displays the existing contents of the entry item by item, prompting for changes or additions. If the desired entry is not already in the lexicon, EDIT behaves in the same way as ADDENTRY.

DELETE lets the user delete one or more entries. An entry is not physically deleted; it is removed from the directory, and all entries with arcs pointing to it are modified to eliminate those arcs. (This is simple to do, since for every such arc there is an inverse arc pointing to that entry from the deleted one.) On the next PACK operation (see below) the deleted entry will not be preserved in the lexicon. This command can also be used to delete the defective entries that are occasionally caused by unresolved bugs in the entry-creating routines, or which might arise from other circumstances. A special option with this command searches the directory for a variety of "illegal" conditions such as nonprinting characters, zero-length names, etc.

LIST gives one-line listings of some or all of the entries in the lexicon. The listing for each entry includes the name (the word itself), sense number, part of speech, and the first forty characters of the definition if there is one.

SHOW displays the full contents of one or more entries.

RELATIONS displays a table of the lexical-semantic relations used by the lexicon builder. This table is created by the user in a separate operation.

UNDEF is a special form of EDIT. In creating an entry, the user may create relational arcs from the current word to other words that are not in the lexicon. The system keeps a queue of undefined words. UNDEF invokes EDIT for the word at the head of the queue, thus saving the user the trouble of looking up undefined words.

PACK performs file management on the lexicon, sorting the entries and eliminating space left by deleted ones. This routine works in two passes. In the first pass, the entries are copied from the existing lexicon file to a new file in lexicographic order and a table is created that maps the entries from their old locations to their new ones. At this stage, a relational arc from one entry to another still points to the other entry's old location. The second pass updates the new lexicon, modifying all relational arcs to point to the correct new locations. (A sketch of this two-pass scheme follows the command list.)

QUIT exits from the lexicon builder environment. Any new entries or changes made during the lexicon building session are incorporated and the directory is updated.
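As promised above, here is a minimal sketch of PACK's two-pass relocation scheme. The file layout is simulated with in-memory dictionaries, and all names (pack, old_to_new, the entry fields) are hypothetical illustrations, not the tool kit's own:

    # Pass 1: copy live entries in lexicographic order, recording old -> new slots.
    # Pass 2: rewrite every relational arc to point at the new slot numbers.
    def pack(lexicon):
        """lexicon: slot -> {"name": str, "arcs": [(rel, slot)], "deleted": bool}"""
        live = [(slot, e) for slot, e in lexicon.items() if not e["deleted"]]
        live.sort(key=lambda pair: pair[1]["name"])          # lexicographic order

        old_to_new = {}                                      # relocation table
        packed = {}
        for new_slot, (old_slot, entry) in enumerate(live):  # pass 1: copy
            old_to_new[old_slot] = new_slot
            packed[new_slot] = dict(entry)

        for entry in packed.values():                        # pass 2: fix arcs
            entry["arcs"] = [(rel, old_to_new[tgt]) for rel, tgt in entry["arcs"]
                             if tgt in old_to_new]           # drop arcs to deleted entries
        return packed

    lex = {0: {"name": "carotid", "arcs": [("T", 2)], "deleted": False},
           1: {"name": "aorta",   "arcs": [],          "deleted": True},
           2: {"name": "artery",  "arcs": [("'T", 0)], "deleted": False}}
    print(pack(lex))   # slots renumbered; T and 'T arcs point at new locations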
Extensions to the commands

All of the commands can be abbreviated; so far they all have distinctive initials and can thus be called with a single keystroke.

Each command may be accompanied by command-line arguments to define its action more precisely. Display commands, such as HELP or SHOW, allow the user to get a printout of the display. Where an entry name is to be specified, the user can get more than one entry by means of "wild cards." For instance, the command "LIST produc*" might yield a list showing entries for "produce", "produced", "produces", "producing", "product", and "production."

Additional commands are currently being developed to help the user manage the relation table and the attribute list from within the lexicon builder environment.

The design of the user interface took into account both the available facilities and the expected users. The lexicon builder runs on a VAX 11/750, normally accessed with line-editing terminals. This suggests that a single-line command format is most appropriate. Since much of the work with the system is done over 300 baud telephone lines, conciseness is also important. The users have all had some programming experience (though not necessarily very much), so an operating-system-like interface is easy for them to get used to. If the lexicon builder becomes popular, we hope to have the opportunity to develop a more sophisticated interface, perhaps with a combination of features for beginners and more experienced users.

Structure of a lexical entry

A complete lexical entry consists of:

1. The "name" of the entry -- its character-string form.

2. Its sense. We represent senses by simple numbers, not attempting to formally distinguish polysemy and homonymy, or any other degree of semantic difference. The system leaves to the user the problem of distinguishing different senses from extensions of a single sense: that is, where a word has already been entered in some sense, the user must decide whether to modify the entry for that sense or create a new entry for a new sense.

3. Part of speech, or "class." Our classification of parts of speech is basically the traditional classification with some convenient additions, largely drawn from the classification used by Sager in the LSP (Sager, 1981). Most of the additions are to the category of verbs: "verb" to the lexicon builder denotes the stem form, while the third person and past tense are distinguished as "finite verb", and the past and present participles are classified separately.

4. The text of the definition, entered by the user. At this stage in our work, the definition is not parsed or otherwise analyzed, so its presence is more for purposes of documentation than anything else. In future versions of the lexicon builder, the definition will play an important role in constructing the entry, but in the entry itself it will be replaced by information derived from its analysis.

5. A list of attributes (or semantic features), each with its value, which may be binary or scalar.

6. A predicate calculus definition. For example, for the most common sense of the verb "promise", the predicate calculus definition is expressed as

    promise(x,y,z) = say(x,w,z)
      _event(y) => w = will_happen(y)
      _thing(y) => w = will_receive(z,y)

or, in freer form,

    (x promises y to z) = (x says w to z) where
      w = (y will happen) if y is an event
          (z will receive y) if y is a physical object.

This is entered by the user.
We have been inclined to think of the relational lexicon as a network, since the network representation vividly brings out the interconnected quality which the relational model gives to the lexicon. Predicate calculus is better in other respects; for instance, it expresses the above definition of "promise" much more elegantly than any network notation could. The two methods of representation have traditionally been seen as alternatives rather than as supplementing each other; we believe that predicate calculus has an important supplementary role to play in defining the core vocabulary of the lexicon, although we are not sure yet how to use it.

7. Case structure (for verbs). This is a table describing, for each syntactic slot associated with the verb (subject, direct object, etc.), the semantic case or cases that may be used in that slot ("agent", "experiencer", etc.), and whether it is required, optional, or may be expressed elliptically (as with the direct and indirect object in "I promise!" referring to an earlier statement). Space is reserved in this structure for selection restrictions. A relational model gives us the much more powerful option of indicating through relations such as "permissible subject", "permissible object", etc., not only what words may go with what others, but whether the usage is literal, a conventional figure of speech, fanciful, or whatever. Selection restrictions do, however, have the virtue of conciseness, and they permit us to make generalizations. Relational arcs may then be used to mark exceptions.

8. A list of zero or more relations, each with one or more pointers to other entries, to which the current entry is connected by that relation.

We find it convenient to treat morphological derivations such as plurals of nouns, tenses and participles of verbs, as relations connecting separate entries. The entry for a regularly derived form such as a noun plural is a minimal one, consisting of name, sense, part of speech, and one relational arc, linking the entry to the stem form. The lexicon builder generates these regular forms automatically. It also distinguishes these "regular" entries from "undefined" entries, which have been entered indirectly as target words of relational arcs and which are on the queue accessed by UNDEF, as well as from "defined" entries.

[Figure 2. Structure of a lexical entry: name; sense; class; text of definition; attribute list; predicate calculus definition; case structure table; relations list, with relational arcs pointing to other entries.]

File structure of the lexicon

There are four data files associated with the lexicon.

The first is the lexicon proper. The biggest complicating factor in the design of the lexicon is the extremely interconnected nature of the data; a change in one portion of the file may necessitate changes in many other places in the file. Each entry is linked through relational arcs to many other entries; and for every arc pointing from word1 to word2, there must be an inverse arc from word2 to word1. This means that whenever we create a new arc in the course of building or modifying an entry for word1, we must update the entry for word2 so that it will contain the appropriate inverse arc back to word1. Word2's entry has to be updated or created from scratch; we need to structure the lexicon file so that this updating process, which may take place anywhere in the file, can be done with the least possible dislocation.

    aphasia (1) n.
    definition: a disorder of language due to injury to the brain
    attributes: nonhuman, collective
    predicate calculus: have(x, aphasia) => ~able(speak(x))
    relations:
      TAX    [aphasia is a kind of x]: deficit, disorder, loss, inability
      'TAX   [x is a kind of aphasia]: anomic, global, Gerstmann's, semantic,
             Wernicke's, Broca's, conduction, transcortical
      SYMPTOM [aphasia is a symptom of x]: stroke, TIA
      ASSOC  [aphasia may be associated with x]: apraxia
      'CAUSE [x is a cause of aphasia]: injury, lesion
      NNABLE [aphasia is the inability to do x]: speech, language

Figure 3. Lexical entry for "aphasia"

The size of an entry can vary enormously. Regular derived forms contain only the name, sense, class and one relational arc (to the stem form), as well as a certain amount of overhead for the definition, predicate calculus definition and attribute list, although these are not used. The smallest possible entry takes up about thirty bytes. At the other extreme, a word may have an extensive attribute list, elaborate text and predicate calculus definitions, and dozens or even (eventually) hundreds of relational arcs. "Aphasia", a moderately large entry with 19 arcs, occupies 322 bytes. Like all entries in the current lexicon, it will be subject to updating and will certainly become much larger.

With this range of entry sizes, the choice between fixed-size and variable-size records becomes somewhat painful. Variable-size records would be highly convenient as well as efficient except for the fact that when we add a new entry that is related to existing entries, we must add new arcs to those entries. The existing entries thus no longer fit into their previous space and must be either broken up or moved to a new space. The former option creates problems of identifying the various pieces of the entry; the latter requires that yet more existing entries be modified. Because of these problems, we have opted for a fixed-size record. Some space is wasted, either in empty space if the record is too large or through proliferation of pointers if the record is too small; but the amount of necessary updating is much less, and the file can be kept in order through frequent use of the PACK command. The choice of record size is conditioned by many factors, system requirements as well as the range of entry sizes. We are currently working on determining the best record size for the MRH application.

So far the user does not have the option of saving or rejecting the results of a lexicon building session, since entries are written to the file as soon as they are created. We are studying ways of providing this option. A brute force way would be to keep the entire lexicon in memory and rewrite it at the end of the session. This is feasible if the host computer is large and the lexicon is small. The 2000-word lexicon for the Michael Reese stroke database takes up about a third of a megabyte, so this approach would work on a mainframe or a large minicomputer such as our VAX 750, but could not readily be ported to a smaller machine; nor could we handle a much larger vocabulary such as we plan to create with the automatic lexicon builder.

The second file is a directory, showing each entry's name, sense, and status (defined, undefined or regular derivative), with a pointer to the appropriate entry in the lexicon proper. The directory entries are linked in lexicographic order. When the lexicon builder is invoked, the entire directory is read into a buffer in memory, and this buffer is updated as entries are created, modified or deleted.
At the end of a lexicon building session, the updated directory is written out to disk.

The third (optional) file is a table of attributes, with pointers into the lexicon proper. This can be extended into a feature matrix.

The fourth (also optional) is a table of pre-defined relations. This table includes, for each relation:

(1) its mnemonic name.

(2) its properties. A relation may be reflexive, symmetric or transitive; there may be other properties worth including.

(3) a pointer to the relation's inverse. If x REL y, then we can define some 'REL such that y 'REL x. If REL is reflexive or symmetric, then 'REL = REL.

(4) the appropriate parts of speech for the words linked by the relation. For instance, the NNABLE relation links two nouns, while the collocational PREP relation links a preposition to a noun. Taxonomy can link any two words (apart from prepositions, conjunctions, etc.) as long as they are of the same part of speech: nouns to nouns, verbs to verbs, etc.

(5) the text of a prompt. ADDENTRY uses this prompt when querying the user for the occurrence of relational arcs involving this relation. For instance, if we are entering the word "promise" and our application uses the taxonomy relation, we might choose a short prompt, in which case the query for taxonomy might take the form

    "promise" T: [user enters word2 here]

or we could use something more explicit:

    "promise" is a kind of:

Users familiar with lexical-semantic relations might prefer the shorter mnemonic prompt, whereas other users might prefer a prompt that better expressed the significance of the relation.

THE AUTOMATIC LEXICON BUILDER

Building a very large lexicon

There are numerous logistical problems in implementing the sort of very large lexicon that would result from analysis of an entire dictionary, as the work of Amsler and White (1979) or Kelly and Stone (1975) shows. Integrating the lexicon builder with the LSP, and writing preprocessors for dictionary data, will also be big jobs. Fully automatic analysis of dictionary material, then, is a long-range goal.

A major problem in the relational analysis of the dictionary is that of determining what relations to use. Noun and verb definitions rely on taxonomy to a great extent (e.g. Amsler and White, 1979), but there are definitions that do not clearly fit this pattern; furthermore, even in a taxonomic definition, much semantic information is contained in the qualifying or differentiating part of the definition.

Adjective definitions are another problem area. Adjectives are usually defined in terms of nouns or verbs rather than other adjectives, so simple taxonomy does not work neatly. In a sample of about 7,000 definitions from W7, we identified nineteen major relations unique to adjective definitions, and these covered only half of the sample. The remaining definitions were much more varied and would probably require far more than nineteen additional relations. And for each relation, we had to identify the words or phrases (the "defining formulas") that signaled the presence of the relation.

The MTQ model

For these reasons as well as theoretical ones, we need a simplifying model of relations, a model that enables us either to avoid the endless identification of new relations or to conduct the identification within an orderly framework. Werner's MTQ schema (Werner, 1978; Werner and Topper, 1976) seems to provide the basis for such a model.

Werner identifies only three relations: modification, taxonomy and queueing.
He asserts that all other relations can be expressed as compounds of these relations and of lexical items -- for instance, the PART relation can be expressed, with the help of the lexical item "part", by the relational arcs

    Broca's area T part
    brain M part

which say in effect that Broca's area is a kind of part, specifically a "brain-part."

Werner's concept of modification and taxonomy reflects Aristotle's model of the definition as consisting of species, genus and differentiae -- taxonomy links the species to the genus and modification links the differentiae to the genus. A study of definitions in W7 and LDOCE shows that they do indeed follow this pattern, although (as in adjective definitions) the pattern is not always obvious.

The special power of MTQ in the analysis of definitions is that in a definition following the Aristotelian structure, taxonomy and modification can be identified by purely syntactic means. One (or occasionally more than one) word in the definition is modified directly or indirectly by all the other words. The core word is linked to the defined word by taxonomy; all the others are linked to the core word by modification. (Queueing so far does not seem to be important in the analysis of definitions.)

In order to avoid certain ambiguities that arise in a very elaborate network such as that generated from a large dictionary, we have replaced the separate modification and taxonomy arcs with a single, ternary relational arc that keeps the species, genus and differentiating items of any particular definition linked to each other.

The problem of identifying "higher level" relations such as PART and NNABLE in an MTQ network still remains. At this point it seems to be similar to the problem of identifying higher level relations from defining formulas.

Another pleasant discovery is that the Linguistic String Parser, which we have used successfully for some years, is exceptionally well suited for this strategy, since it is geared toward an analysis of sentences and phrases in terms of "centers" or "cores" with their modifying "adjuncts", which is exactly the kind of analysis we need to do.

Design of the automatic lexicon builder

The automatic lexicon builder will contain at least the following subsystems:

1. The standard data structure for the lexical entry, as described for the interactive lexicon builder, with slight changes to adjust to the use of MTQ. The relation list is presently structured as a linked list of relations, each pointing to a linked list of word2s. ("Word2" refers to any word related to the word ("word1") we are currently investigating.) Incorporating the ternary MTQ model, we would have two relation lists: a T list and an M list. The T list would be a linked list of words connected to word1 by the T relation; its structure would be identical to the present relation list except that its nodes would be lexical entry pointers instead of relations. Each of these lexical entry pointers would, like the relation nodes in the existing implementation, point to a linked list of word2s. The word2s in the T list would be connected to the T words by an inverse-modification relation ('M) and the word2s in the M list would be connected to the M words by inverse taxonomy ('T).

2. Preprocessors to convert pre-existing data to the standard form. The preprocessor need not be intelligent; its job is to identify and decode part-of-speech and other such information, separating this from the definition proper.
Part of the preprocessing phase is to generate a "dictionary" for the LSP. This dictionary need only contain part-of-speech information for all the words that will be used in definitions; other information such as part-of-speech subclass and selection restrictions is helpful but not necessary. Sager and her associates (1980) have created programs to do this.

3. Batch and interactive input modules. The batch input reads a data file in standard form, perhaps optionally noting where further information would be especially desirable. The interactive input is preserved from the interactive version of the system and allows the user to "improve" on dictionary data as well as to observe the results of the dictionary parse.

4. Definition analyzer. In this module, the LSP will parse the definition to produce a parse tree, which will then be converted into an MTQ network to be linked into the overall lexical network.

5. Entry generator. This module, like the preprocessor, can be tailored to the user's needs.

SUMMARY

A program has been written that enables a user interested in creating a lexicon for natural language processing to generate lexical entries interactively and link them automatically to other lexical entries through lexical-semantic relations. The program provides a small set of commands that allow the user to create, modify, delete, and display lexical entries, among other operations.

The immediate motivation for the program was to produce a relational lexicon for text generation of clinical reports by a diagnostic expert system. It is now being used for that purpose. It can equally well be used in any other sublanguage environment; in addition, it is intended to be compatible, as far as possible, with models of lexicon structure other than the relational model on which it is based.

The interactive lexicon builder is further intended as the starting point for a fully automatic lexicon building program which will create a large, general purpose relational lexicon from machine-readable dictionary text, using a slightly modified form of Werner's Modification-Taxonomy-Queueing relational model.

REFERENCES

Ahlswede, Thomas E., and Evens, Martha W., 1983. "Generating a Relational Lexicon from a Machine-Readable Dictionary." Proceedings of the Conference on Artificial Intelligence, Oakland University, Rochester, Michigan.

Ahlswede, Thomas E., and Evens, Martha W., 1984. "A Lexicon for a Medical Expert System." Presented at the Workshop on Relational Models, Coling '84, Stanford University, Palo Alto, California.

Ahlswede, Thomas E., in press. "A Linguistic String Grammar of Adjective Definitions." In S. Williams, ed., Humans and Machines: The Interface Through Language. Ablex.

Amsler, Robert A., and White, John S., 1979. Development of a Computational Methodology for Deriving Natural Language Semantic Structures via Analysis of Machine Readable Dictionaries. Linguistics Research Center, University of Texas.

Evens, Martha W., Ahlswede, Thomas E., Hill, Howard, and Li, Ping-Yang, 1984. "Generating Case Reports from the Michael Reese Stroke Database." Proc. 1984 Conference on Intelligent Systems and Machines, Oakland University, Rochester, Michigan, April.

Evens, Martha W., Vandendorpe, James, and Wang, Yih-Chen, in press. "Lexical-Semantic Relations in Information Retrieval." In S. Williams, ed., Humans and Machines: The Interface Through Language. Ablex.

Iris, Madelyn, Litowitz, Bonnie, and Evens, Martha W., unpublished.
"The Part-Whole Relation in the Lexicon: an Investigation of Semantic Primitives." Kelly, Edward F., and Stone. Philip J., 1975. Computer Recognition of English Word Senses. North-Holland, Amsterdam. Sager, Naomi, 1981. Information Processing. New York. Natural Language Addison-Wesley, Sager Naomi, Hirschman, Lynette, White, Carolyn, Foster, Carol, Wolff, Susanne, Grad, Robert, and Fitzpatrick, Eileen, 198~. Research into Methods for Automatic Classification and Fact Retrieval in Science Subfields. String Reports No. 13, New York University. Werne~, Oswald, 1978. "The Synthetic Informant Model: the Simulation of Large Lexical/Semantic Fields." In M. Loflin and J. Silverberg, eds., Discourse and Difference in Cognitive Anthropology. Mouton, The Hague. Warner, Oswald, and Topper, Martin D., 1976. "On the Theoretical Unity of Ethnoscience Lexicography and Ethnoscience Ethnographies." In C. Rameh, ed., Seman- tics, Theory and Application, Proc. Georgetown University Round Table on Language and Linguistics. 276
USING AN ON-LINE DICTIONARY TO FIND RHYMING WORDS AND PRONUNCIATIONS FOR UNKNOWN WORDS

Roy J. Byrd
I.B.M. Thomas J. Watson Research Center
Yorktown Heights, New York 10598

Martin S. Chodorow
Department of Psychology, Hunter College of CUNY
and I.B.M. Thomas J. Watson Research Center
Yorktown Heights, New York 10598

ABSTRACT

Humans know a great deal about relationships among words. This paper discusses relationships among word pronunciations. We describe a computer system which models human judgement of rhyme by assigning specific roles to the location of primary stress, the similarity of phonetic segments, and other factors. By using the model as an experimental tool, we expect to improve our understanding of rhyme. A related computer model will attempt to generate pronunciations for unknown words by analogy with those for known words. The analogical processes involve techniques for segmenting and matching word spellings, and for mapping spelling to sound in known words. As in the case of rhyme, the computer model will be an important tool for improving our understanding of these processes. Both models serve as the basis for functions in the WordSmith automated dictionary system.

1. Introduction

This paper describes work undertaken in the development of WordSmith, an automated dictionary system being built by the Lexical Systems group at the IBM T. J. Watson Research Center. WordSmith allows the user to explore a multidimensional space of information about words. The system permits interaction with lexical databases through a set of programs that carry out functions such as displaying formatted entries from a standard dictionary and generating pronunciations for a word not found in the dictionary. WordSmith also shows the user words that are "close" to a given word along dimensions such as spelling (as in published dictionaries), meaning (as in thesauruses), and sound (as in rhyming dictionaries).

Figure 1 shows a sample of the WordSmith user interface. The current word, urgency, labels the text box at the center of the screen. The box contains the output of the PRONUNC application applied to the current word: it shows the pronunciation of urgency and the mapping between the word's spelling and pronunciation. PRONUNC represents pronunciations in an alphabet derived from Webster's Seventh Collegiate Dictionary. In the pronunciation shown, "*" represents the vowel schwa, and ">" marks the vowel in the syllable bearing primary stress. Spelling-to-pronunciation mappings will be described in Section 3.

Three dimensions, displaying words that are neighbors of urgency, pass through the text box. Dimension one, extending from ureide to urinometric, contains words from the PRONUNC data base which are close to urgency in alphabetical order. The second dimension (from somebody to company) shows words which are likely to rhyme with urgency. Dimension three (from pudency to pruriency) is based on a reverse alphabetical ordering of words, and displays words whose spellings end similarly to urgency. The RHYME and REVERSE dimensions are discussed below.
[Figure 1. WordSmith User Interface. The screen shows the current word urgency in a central text box containing its pronunciation as a noun (N: >*R-J*N-SE3) and the spelling-to-pronunciation mapping (u:>* r:R g:J e:* n:N c:S y:E3). Three columns of neighboring words run through the box: DIM1, the PRONUNC dimension (ureide ... urge, urgent ... urinometric); DIM2, the RHYME dimension (somebody, perfidy, ..., currency, trustworthy, twopenny, company); and DIM3, the REVERSE dimension (pudency, agency, ..., incipiency, pruriency). The command line reads: APPLICATION: PRONUNC; DIM1: PRONUNC; DIM2: RHYME; DIM3: REVERSE.]

Section 2 describes the construction of the WordSmith rhyming dimension, which is based on an encoding procedure for representing pronunciations. The encoding procedure is quite flexible, and we believe it can be used as a research tool to investigate the linguistic and psycholinguistic structure of syllables and words. Section 3 outlines a program for generating a pronunciation of an unknown word based on pronunciations of known words. There is evidence (Rosson, 1985) that readers sometimes generate a pronunciation for an unfamiliar letter string based on analogy to stored lexical "neighbors" of the string, i.e. actual words that differ only slightly in spelling from the unfamiliar string. A program which generates pronunciations by analogy might serve as a supplement to programs that use spelling-to-sound rules in applications such as speech synthesis (Thomas, et al., 1984), or it might be used to find rhyming words, in WordSmith's rhyming dimension, for an unknown word.

2. Rhyme

The WordSmith rhyme dimension is based on two files. The first is a main file keyed on the spelling of words arranged in alphabetical order and containing the words' pronunciations organized according to part of speech. This same file serves as the data base for the PRONUNC application and dimension shown in Figure 1. The second file is an index to the first. It is keyed on encoded pronunciations and contains pointers to words in the main file that have the indicated pronunciations. If a single pronunciation corresponds to multiple spellings in the main file, then there will be multiple pointers, one for each spelling. Thus, this index file also serves as a list of homophones. The order of the encoded pronunciations in the index file defines the rhyming dimension, so that words which are close to one another in this file are more likely to rhyme than words which are far apart.

The original motivation for the encoding used to obtain the rhyme dimension comes from published reverse dictionaries, some of which (e.g., Walker, 1924) even call themselves "rhyming dictionaries". Such reverse dictionaries are obtained from a word list by (a) writing the words right-to-left, instead of left-to-right, (b) doing a normal alphabetic sort on the reversed spellings, and (c) restoring the original left-to-right orientation of the words in the resulting sorted list.
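This construction is easy to state in code. The following is only an illustrative sketch of steps (a)-(c), not WordSmith's implementation:

    # Sort words on their reversed spellings, then present them in normal
    # orientation: steps (a)+(b), with (c) implicit in returning the words.
    def reverse_dictionary(words):
        return sorted(words, key=lambda w: w[::-1])

    print(reverse_dictionary(["agency", "urgency", "fervency", "regency"]))
    # ['agency', 'regency', 'urgency', 'fervency']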
This reverse-spelling procedure was used to derive the REVERSE dimension shown in Figure 1.

There are several problems with using reverse dictionaries as the basis for determining rhymes. First, since English spelling allows multiple ways of writing the same sounds, words that in fact do rhyme may be located far apart in the dictionary. Second, since English allows a given spelling to be pronounced in multiple ways, words that are close to one another in the dictionary will not necessarily rhyme with each other. Third, the location of primary stress is a crucial factor in determining if two words rhyme (Rickert, 1978). Primary stress is not encoded in the spelling of words. As an extreme example of this failure of reverse dictionaries, note that the verb record does not rhyme with the noun record. Fourth, basing rhyme on the reverse linear arrangement of letters in words gives monotonically decreasing weight to the vowels and consonants as one moves from right to left in the word. This procedure does not capture the intuition that the vowel in the syllable bearing primary stress and the vowels following this syllable are more significant determiners of rhyme than are the consonants. For example, we feel that as a rhyme for urgency, fervency would be better than agency. A reverse dictionary, however, would choose the latter. More specifically, even if the difficulties associated with spelling differences were overcome, a reverse dictionary would still accord more weight to the /g/ consonant sound of agency than to the /ər/ vowel sound of fervency, contrary to our intuitions.

As already indicated, our procedure uses word pronunciations rather than spellings as the basis for the rhyme dimension. A total of more than 120,000 pronunciations from Webster's Seventh Collegiate Dictionary have been submitted to the encoding process. The first step in encoding replaces the symbols in the pronunciation representations with single-byte codes representing phonetic segments. The procedure which maps segments to byte codes also allows different segments to be mapped into a single code, in effect defining equivalence classes of segments. For example, the French u sound in brut is mapped onto the same segment as the English long u sound in boot. This is the same mapping that most English speakers would make.

In the mapping currently in use, all vowels are organized linearly according to the vowel triangle. At one end of the spectrum is the long e sound in beet (/i/). At the other end is the long u sound in boot (/u/).

    beet  i  \           /  u  boot
    bit   I   \         /   U  book
    bait  e    \       /    o  boat
    bat   æ     \     /     o  bought
                 a  pot

The diphthongs are organized into two subseries, one for rising diphthongs and the other for falling ones. As with the vowels, each subseries is a linear arrangement of the diphthongs according to the position of the initial sound on the vowel triangle. The consonants are similarly organized into several subseries. There are voiced and voiceless stops, voiced and voiceless fricatives and affricates, nasals, and liquids.

An important point about this mapping from pronunciation patterns to phonetic segments is that it is flexible. Both the phonetic equivalence classes and the collating sequence can be easily changed. The system can thus serve as the basis for experimentation aimed at finding the precise set of phonetic encodings that yield the most convincing set of rhymes.

The second encoding step arranges the segments for a pronunciation in the order representing their importance for determining rhyme.
This ordering is also the subject of continuing experimentation. The current arrangement is as follows:

(1) All segments preceding the syllable bearing primary stress are recorded in the order that they occur in the pronunciation string.

(2) All consonantal segments in and following the syllable bearing primary stress are added to the encoding in the order in which they occur.

(3) All vocalic segments (vowels and diphthongs) in and following the syllable bearing primary stress are placed before any segments for trailing consonants in the final syllable. If there are no trailing consonants in the final syllable, then these vocalic segments are placed at the end of the encoding.

Note that this scheme preserves the order of the segments preceding the point of primary stress, as well as those in the final syllable. For words where primary stress occurs before the final syllable, the vowels are raised in importance (with respect to rhyming) over all consonants except final ones. This procedure allows us to capture the intuition that fervency is a better rhyme for urgency than agency.

The final step in the encoding procedure reverses the phonetic segment strings right-for-left, groups them according to the position of the syllable bearing primary stress (i.e., the distance of that syllable from the end of the word) and sorts the groups just as in the production of reverse dictionaries. The difference is that now neighbors in the resulting sorted list have a better chance of rhyming because of the use of pronunciations and the application of our intuitions about rhymes.

We note that the resulting lists of rhymes are not perfect. This is so first because we have not completed the experiments which will result in an "optimal" set of intuitions about the encoding process. One planned experiment will clarify the position of the schwa vowel in the vowel triangle. Another will study intervocalic consonant clusters which, especially when they contain nasals or liquids, result in less successful rhymes. A third study will allow us to identify "discontinuities" in the rhyme list, across which rhyming words are very unlikely to be found. In Figure 1, a discontinuity seems to occur between currency and trustworthy.

The second reason that our rhyme lists are not perfect is that it is unlikely that any single dimension will be sufficient to guarantee that all and only good rhymes for a given word will appear adjacent to that word in the dimension's order, if only because different people disagree on what constitutes "good" rhyme.

Examples

We give two sequences of words selected from the WordSmith RHYME dimension.

    antiphonary dictionary seditionary expeditionary missionary

These five words have their primary stress in the fourth syllable from the right, and they also have the same four vowel sounds from that point onwards. Notice that the spelling of antiphonary would place it quite far from the others in a standard reverse dictionary. In addition, the extra syllables at the beginning of antiphonary, seditionary, and expeditionary are irrelevant for determining rhyme.

    write wright rite right

These four words, each a homonym of the others, share a single record in the rhyming index and are therefore adjacent in the WordSmith RHYME dimension.

3. Pronunciation of Unknown Words

Reading aloud is a complex psycholinguistic process in which letter strings are mapped onto phonetic representations which, in turn, are converted into articulatory movements.
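Since a pronunciation generated for an unknown word can be fed back into the Section 2 encoding to place the word on the RHYME dimension, a sketch of that three-step encoding is useful here. The syllable representation and segment symbols below are invented for illustration; they are not the PRONUNC alphabet:

    # A pronunciation is a list of syllables (onset, vowel, coda); `stress`
    # is the index of the syllable bearing primary stress.
    def rhyme_key(syllables, stress):
        pre = []                                       # step (1): pre-stress segments
        for onset, vowel, coda in syllables[:stress]:
            pre += list(onset) + [vowel] + list(coda)
        tail = syllables[stress:]
        consonants = [c for onset, _, coda in tail for c in onset + coda]  # step (2)
        vowels = [v for _, v, _ in tail]                                   # step (3)
        final_coda = list(syllables[-1][2])
        if final_coda:       # vowels go just before the final syllable's trailing consonants
            body = consonants[:-len(final_coda)] + vowels + final_coda
        else:
            body = consonants + vowels
        return "".join(reversed(pre + body))           # final right-for-left reversal

    urgency  = [("", "@r", ""), ("j", "@", "n"), ("s", "E", "")]   # ur-gen-cy
    fervency = [("f", "@r", ""), ("v", "@", "n"), ("s", "E", "")]
    agency   = [("", "e", ""),  ("j", "@", "n"), ("s", "E", "")]
    print(rhyme_key(urgency, 0), rhyme_key(fervency, 0), rhyme_key(agency, 0))
    # urgency and fervency share the longer key prefix "E@@rsn", so they sort
    # as neighbors; agency diverges earlier -- matching the intuition above.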
Psycholinguists have generally assumed (Forster and Chambers, 1973) that the mapping from letters to phonemes is mediated by two processes, one based on rules and the other based on retrieval of stored pronunciations. For example, the rule ea -> /i/ converts the ea into the long e sound of leaf. The other process, looking up the stored pronunciation of a word, is responsible for the reader's rendering of deaf as /dɛf/, despite the existence of the ea -> /i/ rule. Both processes are believed to operate in the pronunciation of known words (Rosson, 1985).

Until recently, it was generally assumed that novel words or pseudowords (letter strings which are not real words of English but which conform to English spelling patterns, e.g., heaf) are pronounced solely by means of the rule process, because such strings do not have stored representations in the mental lexicon. However, Glushko (1979) has demonstrated that the pronunciation of a pseudoword is influenced by the existence of lexical "neighbors," i.e., real words that strongly resemble the pseudoword. Pseudowords such as heaf, whose closest neighbors (leaf and deaf) have quite different pronunciations, take longer to read than pseudowords such as hean, all of whose close neighbors have similar pronunciations (dean, lean, mean, etc.). (It has been assumed that words which differ only in initial consonants are "closer" neighbors than those which differ in other segments.) Glushko has also demonstrated an effect of lexical neighbors on the pronunciation of familiar words of English.

The picture that emerges from this psychological work depicts the retrieval process as selecting all stored words which are similar to a given input. If the input is not found in this set (i.e., the input is a novel word or pseudoword), its pronunciation is generated by analogy from the pronunciations that are found. Analogical processing must take note of the substring common to the input and its neighbors (ean in the case of hean), use only this part of the pronunciation, and make provision for pronouncing the substring which is different (h). When the pronunciations of the lexical neighbors are consistent, the pronunciation of the pseudoword can be generated by the reader more quickly than when the pronunciations are inconsistent.

There are of course many unanswered questions about how readers actually generate pronunciations by analogy. One approach to answering the questions is to build a computational system that can use various strategies for finding lexical neighbors, combining partial pronunciations, etc., and then compare the output of the system to the pronunciations produced by human readers. The following is an outline of such a computational system.

Two WordSmith files will be used to support a proposed program that generates pronunciations for unknown words based on stored pronunciations of known words. The first is a main file which is keyed on the spelling of words and which contains pronunciations organized according to part of speech. This is the file which supported the PRONUNC and RHYME WordSmith functions described earlier. In this file, each pronunciation of a word has stored with it a mapping from its phonetic segments onto the letters of the spelling of the word. These mappings were generated by a PROLOG program that uses 148 spelling-to-pronunciation rules for English (e.g., ph -> /f/). The second file is an index to the main file keyed on reverse spelling.
This file is equivalent to the one which supports the REVERSE WordSmith dimension shown in Figure 1.

The strategy for generating a pronunciation for an unknown word is to find its lexical neighbors and produce a pronunciation "by analogy" to their pronunciations. The procedure is as follows: (a) Segment the spelling of the unknown word into substrings. (b) Match each substring to part of the spelling of a known word (or words). (c) Consult the spelling-to-pronunciation map to find the pronunciation of the substring. (d) Combine the pronunciations of the substrings into a pronunciation for the unknown word. These steps are illustrated below for the unknown word brange.

(a) Segmentation

    brange
    <-->      initial substring
     <--->    final substring

(b) Matching

bran is the longest initial substring in brange that matches a word-initial substring in the dictionary. The word bran is a dictionary entry, and 20 other words begin with this string.

range is the longest final substring in brange that matches a word-final substring in the dictionary. The match is to the word range. In the reverse spelling file, 22 other words end in ange.

(c) Pronunciation of substrings

All 21 words that have the initial string match for bran have the mapping

    b r a n
    | | | |
    b r æ n

In 20 of the 23 words that match word-final ange, the mapping is

    a n ge
    | |  |
    e n  j      as in range (/renj/)

The other three words are flange (/ænj/), orange (/ɪnj/), and melange (/änj/).

(d) Combining pronunciations

From the substring matches, the pronunciations of /b/, /r/, /n/, /g/, and /e/ are obtained in a straightforward manner, but the pronunciation of the vowel a is not the same in the bran and ange substrings. Thus, two different pronunciations emerge as the most likely renderings of brange. (i) below is modelled after range or change, and (ii) is modelled after bran or branch.

    (i)   b r a n ge        (ii)  b r a n ge
          | | | |  |              | | | |  |
          b r e n  j              b r æ n  j

Here, pronunciation by analogy yields two conflicting outcomes depending upon the word model selected as the lexical neighbor. If people use similar analogical strategies in reading, then we might expect comparable disagreements in pronunciation when they are asked to read unfamiliar words. A very informal survey we conducted suggests that there is considerable disagreement over the pronunciation of brange. About half of those we asked preferred pronunciation (i), while the others chose (ii).

In the example shown above, segmentation is driven by the matching process, i.e. the substrings chosen are the longest which can be matched in the main file and the reverse spelling file. There are, of course, other possible strategies of segmentation, including division at syllable boundaries and division based on the onset-rhyme structure within the syllable (for brange, br + ange). Evaluation of these alternative methods must await further experimentation.

There are other outstanding questions related to the Matching and Combining steps. If matches cannot be found for initial and final substrings that overlap (as in the example) or at least abut, then information about the pronunciation of an internal substring will be missing. Finding a match for an internal substring requires either a massive indexing of the dictionary by letter position, a time consuming search of the standard indexes, or the development of a clever algorithm.
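A minimal sketch of the longest-match segmentation of steps (a)-(b) follows. The toy word list stands in for the main file and the reverse-spelling index, and every name here is an illustrative assumption:

    WORDS = ["bran", "brand", "branch", "range", "change", "flange", "orange"]

    def longest_initial_match(unknown, words):
        for k in range(len(unknown), 0, -1):                 # longest prefix first
            prefix = unknown[:k]
            hits = [w for w in words if w.startswith(prefix)]
            if hits:
                return prefix, hits
        return "", []

    def longest_final_match(unknown, words):
        for k in range(len(unknown), 0, -1):                 # longest suffix first
            suffix = unknown[-k:]
            hits = [w for w in words if w.endswith(suffix)]  # reverse-index stand-in
            if hits:
                return suffix, hits
        return "", []

    print(longest_initial_match("brange", WORDS))  # ('bran', ['bran', 'brand', 'branch'])
    print(longest_final_match("brange", WORDS))    # ('range', ['range', 'orange'])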
With regard to combining substring pronunciations, the problem of primary stress assignment arises when primary stress is absent from all of the substrings or is present at different locations in two or more of them. Finally, there is a question of the weight that should be assigned to alternative pronunciations generated by this procedure. Should a match to a high frequency word be preferred over a match to a low frequency word? Is word frequency more important than the number of matching substrings which have the same pronunciation? These are empirical psycholinguistic questions, and the answers will no doubt help us generate pronunciations that more closely mirror those of native English speakers.

4. Conclusion

The two applications described here, finding rhyming words and generating pronunciations for unknown words, represent some ways in which the tools of computational linguistics can be used to address interesting psycholinguistic questions about the representation of words. They also show how answers to these psycholinguistic questions can, in turn, contribute to work in computational linguistics, in this case to development of the WordSmith on-line dictionary.

Acknowledgements

We are grateful to Barbara Kipfer for her preliminary work on the syllabification of unknown words, and to Yael Ravin and Mary Neff for comments on earlier versions of this report.

References

Forster, K. and Chambers, S. (1973), Lexical access and naming time. Journal of Verbal Learning and Verbal Behavior, 12, 627-635.

Glushko, R. (1979), The organization and activation of orthographic knowledge in reading aloud. Journal of Experimental Psychology, 5, 674-691.

Rickert, W.E. (1978), Rhyme terms. Style, 12(1), 35-46.

Rosson, M.B. (1985), The interaction of pronunciation rules and lexical representations in reading aloud. Memory and Cognition, in press.

Thomas, J., Klavans, J., Nartey, J., Pickover, C., Reich, D., and Rosson, M. (1984), WALRUS: A development system for speech synthesis. IBM Research Report RC-10626.

Walker, J. (1924), The Rhyming Dictionary, Routledge and Kegan Paul, London.

Webster's Seventh Collegiate Dictionary (1967), Merriam, Springfield, Massachusetts.
Towards a Self-Extending Lexicon*

Uri Zernik
Michael G. Dyer
Artificial Intelligence Laboratory
Computer Science Department
3531 Boelter Hall
University of California
Los Angeles, California 90024

* This work was made possible in part by a grant from the Keck Foundation.

Abstract

The problem of manually modifying the lexicon appears with any natural language processing program. Ideally, a program should be able to acquire new lexical entries from context, the way people learn. We address the problem of acquiring entire phrases, specifically figurative phrases, through augmenting a phrasal lexicon. Facilitating such a self-extending lexicon involves (a) disambiguation--selection of the intended phrase from a set of matching phrases, (b) robust parsing--comprehension of partially-matching phrases, and (c) error analysis--use of errors in forming hypotheses about new phrases. We have designed and implemented a program called RINA which uses demons to implement functional-grammar principles. RINA receives new figurative phrases in context and through the application of a sequence of failure-driven rules, creates and refines both the patterns and the concepts which hold syntactic and semantic information about phrases.

David vs. Goliath

    Native:  Remember the story of David and Goliath? David took on Goliath.
    Learner: David took Goliath somewhere?
    Native:  No. David took on Goliath. He took on him.
    Learner: He won the fight?
    Native:  No. He took him on.
    Learner: David attacked him. He took him on. He accepted the challenge?
    Native:  Right.

    Native:  Here is another story. John took on the third exam question.
    Learner: He took on a hard problem.

Going Punk

    Native:  Jenny wanted to go punk, but her father put his foot down.
    Learner: He moved his foot down? It does not make sense.
    Native:  No. He put his foot down.
    Learner: He put his foot down. He refused to let her go punk.

1. Introduction

A language understanding program should be able to acquire new lexical items from context, forming for novel phrases their linguistic patterns and figuring out their conceptual meanings. The lexicon of a learning program should satisfy three requirements: Each lexical entry should (1) be learnable, (2) facilitate conceptual analysis, and (3) facilitate generation. In this paper we focus on the first two aspects.

1.1 The Task Domain

Two examples, which will be used throughout this paper, are given in the dialogues above. In the first dialogue ("David vs. Goliath") the learner is introduced to an unknown phrase: take on. The words take and on are familiar to the learner, who also remembers the biblical story of David and Goliath. The program, modeling a language learner, interacts with a native speaker as shown. The second dialogue ("Going Punk") involves put one's foot down. Again, the phrase is unknown while its constituents are known.

A figurative phrase such as put one's foot down is a linguistic pattern whose associated meaning cannot be produced from the composition of its constituents. Indeed, an interpretation of the phrase based on the meanings of its constituents often exists, but it carries a different meaning. The fact that this literal interpretation of the figurative phrase exists is a misleading clue in learning. Furthermore, the learner may not even notice that a novel phrase has been introduced, since she is familiar with down as well as with foot.

Becker [Becker75] has described a space of phrases ranging in generality from fixed proverbs such as charity begins at home through idioms such as lay down the law and phrasal verbs such as put up with one's spouse and look up the name, to literal verb phrases such as sit on the chair. He suggested employing a phrasal lexicon to capture this entire range of language structures.
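To make the learning target concrete, a phrasal lexicon entry can be sketched as a pattern-concept pair. The rendering below is a rough Python illustration of ours; the field names and concept labels are hypothetical, not RINA's actual notation:

    # The pattern is the linguistic form; the concept is the phrase's
    # (non-compositional) meaning, shared through variables like ?x.
    take_on = {
        "pattern": ["?x:person", "take:verb", "on", "?y:person"],
        "concept": ("accept-challenge", {"actor": "?x", "opponent": "?y"}),
    }
    put_foot_down = {
        "pattern": ["?x:person", "<put:verb", "foot:body-part", "down>"],
        "concept": ("refuse", {"actor": "?x"}),
    }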
Becker [Becker?5] has described a space of phrases ranging in generality from fixed proverbs such as charity begsns at, home through idioms such as Xay dove t,he tar and phrasal verbs such as put, up rich one's spouse and look up the name, to literal verb phrases such as sit, on she chair. He suggested employing a phrasal lexicon to capture this entire range o( language structures. 284 1.2 Issues in Phrase AequLsition Three issues must be addressed when learning phrases in context. (I) Detecting failures: What are the indications that the initial interpretation of the phrase take him on as "to take a person to a location" is incorrect? Since all the words in the sentence are known, the problem is detected both as a conceptual discrepancy (why would he take his enemy anywhere?) and as a syn- tactic failure (the expected location of the assunied physical transfer is missing). (2) Determining scope and generality of patterns: The linguistic pattern of a phrase may be perceived by the learner at various levels of generalit~l. For ex- ample, in the second dialogue, incorrect generaliza- tions could yield patterns accepting sentences such as: Her boss put his left foot down. He moved his foot dora. He put down his foot. He put dovn his leg. (3) A decision is also required about the scope of the pattern (i.e., the tokens included in the pattern). For instance, the scope of the pattern in John put up with Mary could be (I) ?x:persoa put:verb up where with is associated with l'lmry or (2) ?x:persos put:verb up with ?y:persou, where with is associated with put up. Finding appropriate meanings: The conceptual meaning of the phrase must be extracted from the context which contains many concepts, both ap- propriate and inappropriate for hypothesis forma- tion. Thus there must be strategies for focusing on appropriate elements in the context. 1.3 The Program RINA [Dyer85] is a computer program designed to learn English phrases. It takes as input English sentences which may include unknown phrases and conveys as out- put its hypotheses about novel phrases. The pro~am consists of four components: (l) Phrasal lexicon: This is a list of phrases where each phrase is a declarative pattern-concept pair [WilenskySl]. (2) Case-frame parser: In the parsing process, case- frame expectations are handled by spawning demons [Dyer83]. The parser detects comprehension failures which are used in learning. (3) Pattern Constructor: Learning of phrase patterns is accomplished by analyzing parsing failures. Each failure situation is associated with a pattern- modification action. (4) Concept Constructor: Learning of phrase concepts is accomplished by a set of strategies which are selected according to the context. Schematically, the program receives a sequence of sentence/contezt pairs from which it refines its current pattern/concept pair. The pattern is derived from the sentence and the concept is derived from the coLtext. However, the two processes are not independent since the context influences construction of patterns while linguistic clues in the sentence influence formation of concepts. 2. Phrasal Representation of the Lexicon Parsing in RINA is central since learning is evaluated in terms of parsing ability before and after phrases are acquired. Moreover, learning is accomplished through parsing. 
2.1 The Background

RINA combines elements of the following two approaches to language processing:

Phrase-based pattern matching: In the implementation of UC [Wilensky84], an intelligent help system for UNIX users, both PHRAN [Arens82], the conceptual analyzer, and PHRED [Jacobs85], the generator, share a phrasal lexicon. As outlined by Wilensky [Wilensky81], this lexicon provides a declarative database, being modularly separated from the control part of the system which carries out parsing and generation. This development in representation of linguistic knowledge is paralleled by theories of functional grammars [Kay79] and lexical-functional grammars [Bresnan78].

Case-based demon parsing: Boris [Dyer83] modeled reading and understanding stories in depth. Its conceptual analyzer employed demon-based templates for parsing and for generation. Demons are used in parsing for two purposes: (1) to implement syntactic and semantic expectations [Riesbeck74] and (2) to implement memory operations such as search, match and update. This approach implements Schank's [Schank77] theory of representation of concepts, and follows case-grammar [Fillmore68] principles.

RINA uses a declarative phrasal lexicon as suggested by Wilensky [Wilensky82], where a lexical phrase is a pattern-concept pair. The pattern notation is described below and the concept notation is Dyer's [Dyer83] i-link notation.

2.2 The Pattern Notation

To span English sentences, RINA uses two kinds of patterns: lexical patterns and ordering patterns [Arens82]. In Figure 1 we show sample lexical patterns (patterns of lexical phrases). Such patterns are viewed as the generic linguistic forms of their corresponding phrases.

    1. ?x:(animate,agent) nibble:verb <on ?y:food>
    2. ?x:(person,agent) take:verb on ?y:patient
    3. ?x:(person,agent) <put:verb foot:body-part down>

Figure 1: The Pattern Notation

The notation is explained below:

(1) A token is a literal unless otherwise specified. For example, on is a literal in the patterns above.

(2) ?x:sort denotes a variable called ?x of a semantic type sort. ?y:food above is a variable which stands for references to objects of the semantic class food.

(3) act:verb denotes any form of the verb syntactic class with the root act. nibble:verb above stands for expressions such as: nibbled, has never nibbled, etc.

(4) By default, a pattern sequence does not specify the order of its tokens.

(5) Tokens delimited by < and > are restricted to their specified order. In Pattern 1 above, on must directly precede ?y:food.

Ordering patterns pertain to language word-order conventions in general. Some sample ordering patterns are:

    active:     <?x:agent ?y:(verb,active)>
    passive:    <?x:patient ?y:(verb,passive)> *<by ?z:agent>
    infinitive: <to ?x:(verb,active)> ^?y:agent

Figure 2: Ordering Patterns

The additional notation introduced here is:

(6) An * preceding a term, such as *<by ?z:agent> in the passive pattern above, indicates that the term is optional.

(7) ^ denotes an omitted term. The concept for ?y in the third example above is extracted from the agent of the pattern including the current pattern.

(8) By convention, the agent is the case-frame which precedes the verb in the lexical pattern. Notice that the notion of agent is necessary since (a) the agent is not necessarily the subject (i.e., she was taken), (b) the agent is not necessarily the actor (i.e., she received the book, he took a blow), and (c) in the infinitive form, the agent must be referred to since the agent is omitted from the pattern in the lexicon.

(9)
Notice that the notion of agent is necessary since (a) the agent is not necessarily the subject (i.e., she was taken), (b) the agent is not necessarily the actor (i.e., she received the book, he took a blow), and (c) in the infinitive form, the agent must be referred to, since the agent is omitted from the pattern in the lexicon.

(9) Unification [Kay79] accounts for the interaction of lexical patterns with ordering patterns in matching input sentences.

So far, we have given a declarative definition of our grammar, a definition which is neutral with respect to either parsing or generation. The parsing procedure which is derived from the definitions above still has to be given.

2.3 Parsing Objectives

Three main tasks in phrasal parsing may be identified, ordered by degree of difficulty.

(1) Phrase disambiguation: When more than one lexical phrase matches the input sentence, the parser must select the phrase intended by the speaker. For example, the input the workers took to the streets could mean either "they demonstrated" or "they were fond of the streets". In this case, the first phrase is selected according to the principle of pattern specificity [Arens82]. The pattern ?x:person take:verb <to the streets> is more specific than ?x:person take:verb <to ?y:thing>. However, in terms of our pattern notation, how do we define pattern specificity?

(2) Ill-formed input comprehension: Even when an input sentence is not well phrased according to textbook grammar, it may be comprehensible by people and so must be comprehensible to the parser. For example, John took Mary school is telegraphic, but comprehensible, while John took Mary to conveys only a partial concept. Partially matching sentences (or "near misses") are not handled well by syntax-driven pattern matchers. A deviation in a function word (such as the word to above) might inhibit the detection of the phrase, which could be detected by a semantics-driven parser.

(3) Error detection: When the hypothesized phrase does not match the input sentence/context pair, the parser is required to detect the failure and return with an indication of its nature. Error analysis requires that pattern tokens be assigned a case-significance, as shown in Section 4.

Compounding requirements--disambiguation plus error-analysis capability--complicate the design of the parser. On one hand, analysis of "near misses" (they bury a hatchet instead of they buried the hatchet) can be performed through a rigorous analysis--assuming the presence of a single phrase only. On the other hand, in the presence of multiple candidate phrases, disambiguation could be made efficient by organizing sequences of pattern tokens into a discrimination net. However, attempting to perform both disambiguation and "near miss" recognition and analysis simultaneously presents a difficult problem. The discrimination net organization would not enable comparing the input sentence, the "near miss", with existing phrases.

The solution is to organize the discrimination sequence by order of generality, from the general to the specific. According to this principle, verb phrases are matched by conceptual features first and by syntactic features only later on. For example, consider three initial erroneous hypotheses: (a) bury a hatchet, (b) bury the gun, and (c) bury the hash.
On hearing the words "bury the hatchet", the first hypothesis would be the easiest to analyze (it differs only by a function word, while the second differs by a content-holding word) and the third one would be the hardest (as opposed to the second, hash does not have a common concept with hatchet).

2.4 Case-Frames

Since these requirements are not facilitated by the representation of patterns as given above, we slightly modify our view of patterns. An entire pattern is constructed from a set of case-frames where each case-frame is constructed of single tokens: words and concepts. Each frame has several slots containing information about the case and pertaining to: (a) its syntactic appearance, (b) its semantic concept and (c) its phrase role: agent, patient. Variable identifiers (e.g., ?x, ?y) are used for unification of phrase patterns with their corresponding phrase concepts. Two example patterns are given below. The first example pattern denotes a simple literal verb phrase:

{id:?x class:person role:agent}
{take:verb}
{id:?y class:person role:patient}
{id:?z class:location marker:to}

Figure 3: Case Frames for "He took her to school"

Both the agent and the patient are of the class person; the indirect object is a location marked by the preposition to. The second phrase is figurative:

{id:?x class:person role:agent}
{take:verb}
{marker:to determiner:the word:streets}

Figure 4: Case Frames for "He took to the streets"

The third case frame in Figure 4 above, the indirect object, does not have any corresponding concept. Rather, it is represented as a sequence of words. However, the words in the sequence are designated as the marker, the determiner and the word itself. Using this view of patterns enables the recognition of "near misses" and facilitates error analysis in parsing.

3. Demons Make Patterns Operational

So far, we have described only the linguistic notation and indicated that unification [Kay79] accounts for production of sentences from patterns. However, it is not obvious how to make pattern unification operational in parsing. One approach [Arens82] is to generate word sequences and to compare generated sequences with the input sentence. Another approach [Pereira80] is to implement unification using PROLOG. Since our task is to provide lenient parsing, namely, ill-formed sentences must also be handled by the parser, these two approaches are not suitable. In our approach, parsing is carried out by converting patterns into demons.

Conceptual analysis is the process which involves reading input words left to right, matching them with existing linguistic patterns and instantiating or modifying in memory the associated conceptual meanings. For example, assume that these are the phrases for take in the lexicon:

?x:person take:verb ?y:person ?z:locale    John took her to Boston.
?x:person take:verb ?y:phys-obj            He took the book.
?x:person take:verb off ?y:attire          He took off his coat.
?x:person take:verb on ?y:person           David took on Goliath.
?x:person take:verb a bow                  The actor took a bow.
?x:thing take:verb a blow                  The wall took a blow.
?x:person take:verb <to the streets>       The workers took to the streets.
                                           The juvenile took to the streets.

Figure 5: A Variety of Phrases for TAKE

where variables ?x, ?y and ?z also appear in corresponding concepts (not shown here). How are these patterns actually applied in conceptual analysis?
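Before answering that question with demons, it may help to see the case-frame view of patterns in executable form. The following toy matcher is our own sketch, not RINA's implementation: frames are dictionaries in the style of Figures 3 and 4, the ISA table is an invented stand-in for semantic memory, and matching ignores token order, as item (4) of the notation permits.

```python
# A toy, order-free matcher for case-frame patterns (ours, not RINA's).
# Verb morphology is crudely approximated by a prefix test on the root.

ISA = {"he": "person", "him": "person", "david": "person",
       "goliath": "person", "school": "location", "book": "phys-obj"}

def frame_matches(frame, word):
    if "word" in frame:                        # literal token, e.g. "on"
        return frame["word"] == word
    if "verb" in frame:                        # any inflection of the root
        return word.startswith(frame["verb"])
    return ISA.get(word) == frame["class"]     # typed variable, e.g. ?y:person

def match_pattern(frames, sentence):
    words = sentence.lower().split()
    bindings = {}
    for frame in frames:
        hit = next((w for w in words if frame_matches(frame, w)), None)
        if hit is None:
            return None                        # a missing frame is a failure
        words.remove(hit)
        if "id" in frame:
            bindings[frame["id"]] = hit
    return bindings

take_on = [{"id": "?x", "class": "person"}, {"verb": "took"},
           {"word": "on"}, {"id": "?y", "class": "person"}]
print(match_pattern(take_on, "David took on Goliath"))  # {'?x': 'david', '?y': 'goliath'}
print(match_pattern(take_on, "David took him on"))      # {'?x': 'david', '?y': 'him'}
```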
3.1 Interaction of Lexical and Ordering Patterns

Token order in the lexical patterns themselves (Figure 5) supports the derivation of simple active-voice sentences only. Sentences such as:

Mary was taken on by John.
A weak contender David might have left alone, but Goliath he took on.
David decided to take on Goliath.

Figure 6: A Variety of Word Orders

cannot be derived directly by the given lexical patterns. These sentences deviate from the order given by the corresponding lexical patterns and require interaction with language conventions such as passive voice and infinitive. Ordering patterns are used to span a wider range of sentences in the language. Ordering patterns such as the ones given in Figure 2 depict the word order involving verb phrases. In each pattern the case-frame preceding the verb is specified. (In active voice, the agent appears immediately before the verb, while in the passive it is the patient that precedes the verb.)

3.2 How Does It All Work?

Ordering patterns are compiled into demons. For example, D_AGENT, the demon anticipating the agent of the phrase, is generated by the patterns in Figure 2. It has three clauses:

If the verb is in active form
  then the agent is immediately before the verb.
If the verb is in passive form
  then the agent may appear, preceded by by.
If the verb is in infinitive form
  then the agent is omitted; its concept is obtained from the governing verb.

Figure 7: The Construction of D_AGENT

In parsing, this demon is spawned when a verb is encountered. For example, consider the process in parsing the sentence David decided to take on Goliath. Through identifying the verbs and their forms, the process is:

decided (active, simple): Search for the agent before the verb; anticipate an infinitive form.
take (active, infinitive): Do not anticipate the agent. The actor of the "take on" concept, which is the agent, is extracted from the agent of "decide".

4. Failure-Driven Pattern Construction

Learning of phrases in RINA is an iterative process. The input is a sequence of sentence-context pairs, through which the program refines its current hypothesis about the new phrase. The hypothesis pertains to both the pattern and the concept of the phrase.

4.2 The Learning Cycle

The basic cycle in the process is:

(a) A sentence is parsed on the background of a conceptual context.
(b) Using the current hypothesis, either the sentence is comprehended smoothly, or a failure is detected.
(c) If a failure is detected, then the current hypothesis is updated.

The crucial point in this scheme is to obtain from the parser an intelligible analysis of failures. As an example, consider this part of the first dialogue:

1 Program: He took on him. He won the fight?
2 User:    No. He took him on. David attacked him.
3 Program: He took him on. He accepted the challenge?

The first hypothesis is shown in Figure 8.

pattern: ?x:person take:verb <on ?y:person>
concept: ?x win the conflict with ?y

Figure 8: First Hypothesis

Notice that the preposition on is attached to the object ?y, thus assuming that the phrase is similar to He looked at Mary, which cannot produce the following sentence: He looked her at. This hypothesis underlies Sentence 1, which is erroneous in both its form and its meaning. Two observations should be made by comparing this pattern to Sentence 2:

The object is not preceded by the preposition on.
The preposition on does not precede any object.
These comments direct the construction of the new hypothesis:

pattern: ?x:person take:verb on ?y:person
concept: ?x win the conflict with ?y

Figure 9: Second Hypothesis

where the preposition on is taken as a modifier of the verb itself, thus correctly generating Sentence 3. In Figure 9 the conceptual hypothesis is still incorrect and must itself be modified.

4.3 Learning Strategies

A subset of RINA's learning strategies, the ones used for the David and Goliath dialogue (Section 1.1), are described in this section. In our exposition of failures and actions we will illustrate the situations involved in the dialogues above, where each situation is specified by the following five ingredients: (1) the input sentence (Sentence), (2) the context (not shown explicitly here), (3) the active pattern: either the pattern under construction, or the best matching pattern if this is the first sentence in the dialogue (Pattern1), (4) the failures detected in the current situation (Failures), and (5) the pattern resulting from the application of the action to the current pattern (Pattern2).

Creating a New Phrase

A case-role mismatch occurs when the input sentence can only be partially matched by the active pattern. A goal mismatch occurs when the concept instantiated by the selected pattern does not match the goal situation in the context.

Sentence: David took on Goliath.
Pattern1: ?x:person take:verb ?y:person ?z:location
Failures: Pattern and goal mismatch
Pattern2: ?x:person take:verb

David's physically transferring Goliath to a location fails since (1) a location is not found and (2) the action does not match David's goals. If these two failures are encountered, then a new phrase is created. In absence of a better alternative, RINA initially generates David took him somewhere.

Discriminating a Pattern by Freezing a Prepositional Phrase

A prepositional mismatch occurs when a preposition P matches in neither the active pattern nor in one of the lexical prepositional phrases, such as:

<on ?x:platform>  (indicating a spatial relation)
<on ?x:time-unit> (indicating a time of action)
<on ?x:location>  (indicating a place)

Sentence: David took on Goliath.
Pattern1: ?x:person take:verb
Failures: Prepositional mismatch
Pattern2: ?x:person take:verb <on ?y:person>

The preposition on is not part of the active pattern. Neither does it match any of the prepositional phrases which currently exist for on. Therefore, since it cannot be interpreted in any other way, the ordering of the sub-expression <on ?y:person> is frozen in the larger pattern, using < and >. Two-word verbs present a difficulty to language learners [Ulm75], who tend to ignore the separated verb-particle form, generating take on him instead of take him on. In the situation above, the learner produced this typical error.

Relaxing an Undergeneralized Pattern

Two failures involving on, (1) a case-role mismatch (on ?y:person is not found) and (2) a prepositional mismatch (on appears unmatched at the end of the sentence), are encountered in the situation below:

Sentence: David took him on.
Pattern1: ?x:person take:verb <on ?y:person>
Failures: Prepositional and case-role mismatch
Pattern2: ?x:person take:verb on ?y:person

The combination of these two failures indicates that the pattern is too restrictive. Therefore, the < and > freezing delimiters are removed, and the pattern may now account for two-word verbs. In this case on can be separated from take.
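The two strategies above can be summarized as a pair of pattern-editing operations. The sketch below is our own rendering under simplifying assumptions: a pattern is a list of tokens, and a frozen sub-expression is a nested list standing for the < and > delimiters.

```python
# Sketch (ours, not RINA's) of two pattern-modification actions: freezing
# a prepositional phrase after a prepositional mismatch, and relaxing the
# freeze again when a case-role mismatch shows it was too restrictive.

def freeze(pattern, preposition, obj):
    # prepositional mismatch alone -> fix the "<on ?y:person>" word order
    return pattern + [[preposition, obj]]

def relax(pattern):
    # prepositional + case-role mismatch -> remove the <...> restriction
    return [tok for group in pattern
            for tok in (group if isinstance(group, list) else [group])]

p = ["?x:person", "take:verb"]
p = freeze(p, "on", "?y:person")  # ['?x:person', 'take:verb', ['on', '?y:person']]
p = relax(p)                      # ['?x:person', 'take:verb', 'on', '?y:person']
print(p)
```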
Generalizing a Semantic Restriction

A semantic mismatch is marked when the semantic class of a variable in the pattern does not subsume the class of the corresponding concept in the sentence.

Sentence: John took on the third question.
Pattern1: ?x:person take:verb on ?y:person
Failures: Semantic mismatch
Pattern2: ?x:person take:verb on ?y:task

As a result, the type of ?y in the pattern is generalized to include both cases.

Freezing a Reference Which Relates to a Metaphor

An unrelated reference is marked when a reference in the sentence does not relate to the context, but rather relates to a metaphor (see elaboration in [Zernik85]). The reference his foot cannot be resolved in the context; rather, it is resolved by a metaphoric gesture.

Sentence: Her father put his foot down.
Pattern1: ?x:person put:verb down ?y:phys-obj
Failures: Goal mismatch and unrelated reference
Pattern2: ?x:person put:verb down foot:body-part

Since (1) putting his foot on the floor does not match any of the goals of Jenny's father and (2) the reference his foot is related to the domain of metaphoric gestures rather than to the context, foot becomes frozen in the pattern. This method is similar to a method suggested by Fass and Wilks [Fass83]. In their method, a metaphor is analyzed when an apparently ill-formed input is detected, e.g.: the car drank a lot of gas.

4.4 Concept Constructor

Each pattern has an associated concept which is specified using Dyer's [Dyer83] i-link notation. The concept of a new phrase is extracted from the context, which may contain more than one element. For example, in the first dialogue above, the given context contains some salient story points [Wilensky82] which are indexed in episodic memory as two violated expectations:

• David won the fight in spite of Goliath's physical superiority.
• David accepted the challenge in spite of the risk involved.

The program extracts meanings from the given set of points. Concept hypothesis construction is further discussed in [Zernik85].

5. Previous Work in Language Learning

In RINA, the stimulus for learning is comprehension failure. In previous models language learning was also driven by detection of failures. PST [Reeker76] learned grammar by acting upon differences detected between the input sentence and internally generated sentences. Six types of differences were classified, and the detection of a difference which belonged to a class caused the associated alteration of the grammar. FOUL-UP [Granger77] learned meanings of single words when an unknown word was encountered. The meaning was extracted from the script [Schank77] which was given as the context. A typical learning situation was The car was driving on Hwy 66, when it careened off the road. The meaning of the unknown verb careened was guessed from the $ACCIDENT script. POLITICS [Carbonell79], which modeled comprehension of text involving political concepts, initiated learning when semantic constraints were violated. Constraints were generalized by analyzing underlying metaphors. AMBER [Langley82] modeled learning of basic sentence structure. The process of learning was directed by mismatches between input sentences and sentences generated by the program. Learning involved recovery from both errors of omission (omitting a function word such as the or is in daddy bouncing ball) and errors of commission (producing daddy is liking dinner).
Thus, some programs acquired linguistic patterns and some programs acquired meanings from context, but none of the above programs acquired new phrases. Acquisition of phrases involves two parallel processes: the formation of the pattern from the given set of example sentences, and the construction of the meaning from the context. These two processes are not independent, since the construction of the conceptual meaning utilizes linguistic clues while the selection of pattern elements of new figurative phrases bears on concepts in the context.

6. Current and Future Work

Currently, RINA can learn a variety of phrasal verbs and idioms. For example, RINA implements the behavior of the learner in David vs. Goliath and in Going Punk in Section 1. Modifications of lexical entries are driven by analysis of failures. This analysis is similar to analysis of ill-formed input; however, detection of failures may result in the augmentation of the lexicon. Failures appear as semantic discrepancies (e.g., goal-plan mismatch) or syntactic discrepancies (e.g., case-role mismatch). Finally, references in figurative phrases are resolved by metaphor mapping.

Currently our efforts are focussed on learning the conceptual elements of phrases. We attempt to develop strategies for generalizing and refining acquired concepts. For example, it is desirable to refine the concept for "take on" by this sequence of examples:

David took on Goliath.
The Lakers took on the Celtics.
I took on a hard task.
I took on a new job.
In selecting the name "Towards a Self-Extending Lexicon", we took on an old name.

The first three examples, "deciding to fight someone", "playing against someone" and "accepting a challenge", could be generalized into the same concept, but the last two examples deviate in their meanings from that developed concept. The problem is to determine the desired level of generality. Clearly, the phrases in the following examples:

take on an enemy
take on an old name
take on the shape of a snake

deserve separate entries in the phrasal lexicon. The question is, at what stage is the advantage of further generalization diminished?

Acknowledgments

We wish to thank Erik Mueller and Mike Gasser for their incisive comments on drafts of this paper.

References

[Arens82] Arens, Y., "The Context Model: Language Understanding in a Context," in Proceedings Fourth Annual Conference of the Cognitive Science Society, Ann Arbor, Michigan (1982).

[Becker75] Becker, Joseph D., "The Phrasal Lexicon," pp. 70-73 in Proceedings Interdisciplinary Workshop on Theoretical Issues in Natural Language Processing, Cambridge, Massachusetts (June 1975).

[Bresnan78] Bresnan, Joan, "A Realistic Transformational Grammar," pp. 1-59 in Linguistic Theory and Psychological Reality, ed. M. Halle, J. Bresnan and G. Miller, MIT Press, Cambridge, Massachusetts (1978).

[Carbonell79] Carbonell, J. G., "Towards a Self-Extending Parser," pp. 3-7 in Proceedings 17th Annual Meeting of the Association for Computational Linguistics, La Jolla, California (1979).

[Dyer83] Dyer, Michael G., In-Depth Understanding: A Computer Model of Integrated Processing for Narrative Comprehension, MIT Press, Cambridge, MA (1983).

[Dyer85] Dyer, Michael G. and Uri Zernik, "Parsing Paradigms and Language Learning," in Proceedings AI-85, Long Beach, California (May 1985).
[Fass83] Fass, Dan and Yorick Wilks, "Preference Semantics, Ill-Formedness and Metaphor," American Journal of Computational Linguistics 9(3-4), pp. 178-187 (1983).

[Fillmore68] Fillmore, C., "The Case for Case," pp. 1-90 in Universals in Linguistic Theory, ed. E. Bach and R. Harms, Holt, Rinehart and Winston, Chicago (1968).

[Granger77] Granger, R. H., "FOUL-UP: A Program That Figures Out Meanings of Words from Context," pp. 172-178 in Proceedings Fifth IJCAI, Cambridge, Massachusetts (August 1977).

[Jacobs85] Jacobs, Paul S., "PHRED: A Generator for Natural Language Interfaces," UCB/CSD 85/108, Computer Science Division, University of California Berkeley, Berkeley, California (January 1985).

[Kay79] Kay, Martin, "Functional Grammar," pp. 142-158 in Proceedings 5th Annual Meeting of the Berkeley Linguistics Society, Berkeley, California (1979).

[Langley82] Langley, Pat, "Language Acquisition Through Error Recovery," Cognition and Brain Theory 5(3), pp. 211-255 (1982).

[Pereira80] Pereira, F. C. N. and David H. D. Warren, "Definite Clause Grammars for Language Analysis - A Survey of the Formalism and a Comparison with Augmented Transition Networks," Artificial Intelligence 13, pp. 231-278 (1980).

[Reeker76] Reeker, L. H., "The Computational Study of Language Learning," in Advances in Computers, ed. M. Yovits and M. Rubinoff, Academic Press, New York (1976).

[Riesbeck74] Riesbeck, C. K., "Computational Understanding: Analysis of Sentences and Context," Memo 238, AI Laboratory (1974).

[Schank77] Schank, Roger and Robert Abelson, Scripts Plans Goals and Understanding, Lawrence Erlbaum Associates, Hillsdale, New Jersey (1977).

[Ulm75] Ulm, Susan C., "The Separation Phenomenon in English Phrasal Verbs, Double Trouble," 601, University of California Los Angeles (1975). M.A. Thesis.

[Wilensky81] Wilensky, R., "A Knowledge-Based Approach to Natural Language Processing: A Progress Report," in Proceedings Seventh International Joint Conference on Artificial Intelligence, Vancouver, Canada (1981).

[Wilensky82] Wilensky, R., "Points: A Theory of the Structure of Stories in Memory," pp. 345-375 in Strategies for Natural Language Processing, ed. W. G. Lehnert and M. H. Ringle, Lawrence Erlbaum Associates, New Jersey (1982).

[Wilensky84] Wilensky, R., Y. Arens, and D. Chin, "Talking to UNIX in English: an Overview of UC," Communications of the ACM 27(6), pp. 574-593 (June 1984).

[Zernik85] Zernik, Uri and Michael G. Dyer, Failure-Driven Acquisition of Figurative Phrases by Second Language Speakers, 1985 (submitted for publication).
GRAMMATICAL ANALYSIS BY COMPUTER OF THE LANCASTER-OSLO/BERGEN (LOB) CORPUS OF BRITISH ENGLISH TEXTS

Andrew David Beale
Unit for Computer Research on the English Language
Bowland College, University of Lancaster
Bailrigg, Lancaster, England LA1 4YT

ABSTRACT

Research has been under way at the Unit for Computer Research on the English Language at the University of Lancaster, England, to develop a suite of computer programs which provide a detailed grammatical analysis of the LOB corpus, a collection of about 1 million words of British English texts available in machine readable form. The first phase of the project, completed in September 1983, produced a grammatically annotated version of the corpus giving a tag showing the word class of each word token. Over 93 per cent of the word tags were correctly selected by using a matrix of tag pair probabilities, and this figure was upgraded by a further 3 per cent by retagging problematic strings of words prior to disambiguation and by altering the probability weightings for sequences of three tags. The remaining 3 to 4 per cent were corrected by a human post-editor. The system was originally designed to run in batch mode over the corpus, but we have recently modified procedures to run interactively for sample sentences typed in by a user at a terminal. We are currently extending the word tag set and improving the word tagging procedures to further reduce manual intervention. A similar probabilistic system is being developed for phrase and clause tagging.

THE STRUCTURE AND PURPOSE OF THE LOB CORPUS

The LOB Corpus (Johansson, Leech and Goodluck, 1978), like its American English counterpart, the Brown Corpus (Kucera and Francis, 1964; Hauge and Hofland, 1978), is a collection of 500 samples of British English texts, each containing about 2,000 word tokens. The samples are representations of 15 different text categories: A. Press (Reportage); B. Press (Editorial); C. Press (Reviews); D. Religion; E. Skills and Hobbies; F. Popular Lore; G. Belles Lettres, Biography, Memoirs, etc.; H. Miscellaneous; J. Learned and Scientific; K. General Fiction; L. Mystery and Detective Fiction; M. Science Fiction; N. Adventure and Western Fiction; P. Romance and Love Story; R. Humour. There are two main sections, informative prose and imaginative prose, and all the texts contained in the corpus were printed in a single year (1961).

The structure of the LOB corpus was designed to resemble that of the Brown corpus as closely as possible so that a systematic comparison of British and American written English could be made. Both corpora contain samples of texts published in the same year (1961) so that comparisons are not distorted by diachronic factors.

The LOB corpus is used as a database for linguistic research and language description. Historically, different linguists have been concerned to a greater or lesser extent with the use of corpus citations, to some degree, at least, because of differences in the perceived view of the descriptive requirements of grammar. Jespersen (1909-49) and Kruisinga and Erades (1911) gave frequent examples of citations from assembled corpora of written texts to illustrate grammatical rules. Work on text corpora is, of course, very much alive today. Storage, retrieval and processing of natural language text is a more efficient and less laborious task with modern computer hardware than it was with hand-written card files, but data capture is still a significant problem (Francis, 1980).
The forthcoming work, A Comprehensive Grammar of the English Language (Quirk, Greenbaum, Leech, and Svartvik, 1985), contains many citations from both LOB and Brown Corpora.

A GRAMMATICALLY ANNOTATED VERSION OF THE CORPUS

Since 1981, research has been directed towards writing programs to grammatically annotate the LOB corpus. From 1981-83, the research effort produced a version of the corpus with every word token labelled by a grammatical tag showing the word class of each word form. Subsequent research has attempted to build on the techniques used for automatic word tagging by using the output from the word tagging programs as input to phrase and clause tagging and by using probabilistic methods to provide a constituent analysis of the LOB corpus.

The programs and data files used for word tagging were developed from work done at Brown University (Greene and Rubin, 1971). Staff and research associates at Lancaster undertook the programming in PASCAL while colleagues in Oslo revised and extended the lists used by Greene and Rubin (op. cit.) for word tag assignment. Half of the corpus was post-edited at Lancaster and the other half at the Norwegian Computing Centre for the Humanities.

How word tagging works.

The major difficulties to be encountered with word tagging of written English are the lack of distinctive inflectional or derivational endings and the large proportion of word forms that belong to more than one word class. Endings such as -able, -ly and -ness are graphic realizations of morphological units indicating word class, but they occur infrequently for the purposes of automatic word tag assignment; the reader will be able to establish exceptions to rules assigning word classes to words with these suffixes, because the characters do not invariably represent the same morphemes. The solution we have adopted is to use a look-up procedure to assign one or more potential tags to each input word. The appropriate word tag is then selected for words with more than one potential tag by calculating the probability of the tag's occurrence given neighbouring potential tags.

Potential word tag assignment.

In cases where more than one potential tag is assigned to the input word, the tags represent word classes of the word without taking the syntactic environment into account. A list of one to five word-final characters, known as the 'suffixlist', is used for assignment of appropriate word class tags to as many word types as possible. A list of full word forms, known as the 'wordlist', is used for exceptions to the suffixlist, and, in addition, word forms that occur more than 50 times in the corpus are included in the wordlist, for speed of processing. The term 'suffixlist' is used as a convenient name, and the reader is warned that the list does not necessarily contain word-final morphs; strings of between one and five word-final characters are included if their occurrence as a tagged form in the Brown corpus merits it. The 'suffixlist' used by Greene and Rubin (op. cit.) was substantially revised and extended by Johansson and Jahr (1982) using reverse alphabetical lists of approximately 50,000 word types of the Brown Corpus and 75,000 word types of both Brown and LOB corpora. Frequency lists specifying the frequency of tags for word endings consisting of 1 to 5 characters were used to establish the efficiency of each rule.
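The two-stage lookup just described might be sketched as follows; the entries and tag names shown are invented miniatures, not items from the project's actual wordlist or suffixlist.

```python
# Illustrative sketch of wordlist-then-suffixlist tag assignment.
# All entries and the default are our own stand-ins for the real lists.

WORDLIST = {"the": ["AT"], "watch": ["NN", "VB"]}       # full-form exceptions
SUFFIXLIST = [("ness", ["NN"]), ("ly", ["RB"]),
              ("ed", ["VBD", "VBN"]), ("s", ["NNS", "VBZ"])]

def potential_tags(word):
    if word in WORDLIST:                                 # wordlist overrides
        return WORDLIST[word]
    # longest matching string of word-final characters wins
    for suffix, tags in sorted(SUFFIXLIST, key=lambda e: -len(e[0])):
        if word.endswith(suffix):
            return tags
    return ["NN"]                                        # fallback for this sketch

print(potential_tags("watch"))     # ['NN', 'VB']
print(potential_tags("kindness"))  # ['NN']
print(potential_tags("runs"))      # ['NNS', 'VBZ']
```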
Johansson and Jahr were guided by the Longman Dictionary of Contemporary English (1978) and other dictionaries and grammars, including Quirk, Greenbaum, Leech and Svartvik (1972), in identifying tags for each item in the wordlist. For the version used for Lancaster-Oslo/Bergen word tagging (1985), the suffixlist was expanded to about 790 strings of word-final characters, the wordlist consisted of about 7,000 entries and a total of 135 word tag types were used.

Potential tag disambiguation.

The problem of resolving lexical ambiguity for the large proportion of English words that occur in more than one word class (BLOW, CONTACT, HIT, LEFT, RUN, REFUSE, ROSE, WATCH ...) is solved, whenever possible, by examining the local context. Word tag selection for homographs in Greene and Rubin (op. cit.) was attempted by using 'context frame rules', an ordered list of 5,300 rules designed to take into account the tags assigned to up to two words preceding or following the ambiguous homograph. The program was 77 per cent successful, but several errors were due to appropriate rules being blocked when adjacent ambiguities were encountered (Marshall, 1983: 140). Moreover, about 80 per cent of rule application took just one immediately neighbouring tag into account, even though only a quarter of the context frame rules specified only one immediately neighbouring tag.

To overcome these difficulties, research associates at Lancaster have devised a transition probability matrix of tag pairs to compute the most probable tag for an ambiguous form given the immediately preceding and following tags. This method of calculating one-step transition probabilities is suitable for disambiguating strings of ambiguously tagged words because the most likely path through a string of ambiguously tagged words can be calculated.

The likelihood of a tag being selected in context is also influenced by likelihood markers which are assigned to entries with more than one tag in the lists. Only two markers, '@' and '%', are used, '@' notionally indicating that the tag is correct for the associated form less than 1 in 10 occasions, '%' notionally indicating that the tag occurs less than 1 in 100 occasions. The word tag disambiguation program uses these markers to reduce the probability of the less likely tags occurring in context; '@' results in the probability being halved, '%' results in the probability being divided by eight. Hence tags marked with '@' or '%' are only selected if the context indicates that the tag is very likely.

Error analysis.

At several stages during design and implementation of the tagging software, error analysis was used to improve various aspects of the word tagging system. Error statistics were used to amend the lists, the transition matrix entries and even the formula used for calculating transition probabilities (originally this was the frequency of potential tag A followed by potential tag B divided by the frequency of A; subsequently, it was changed to the frequency of A followed by B divided by the product of the frequency of A and the frequency of B (Marshall, 1983)). Error analysis indicated that the one-step transition method for word tag disambiguation was very successful, but it was evident that further gains could be made by including a separate list of a small set of sequences of words, such as according to, as well as, and so as to, which were retagged prior to word tag disambiguation.
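The one-step transition method lends itself to a Viterbi-style search for the most likely tag path. The sketch below is our own reading of the description above: the matrix values are invented, and a real system derives them from tag-pair frequencies (Marshall, 1983), but the treatment of the '@' and '%' markers (division by 2 and by 8) follows the text.

```python
# A Viterbi-style sketch of one-step tag disambiguation (ours, not CLAWS).

PAIR = {("AT", "NN"): 80, ("AT", "VB"): 1, ("NN", "VBZ"): 40,
        ("NN", "NNS"): 5, ("VB", "VBZ"): 2, ("VB", "NNS"): 1}
PENALTY = {"@": 2, "%": 8}

def disambiguate(candidates):
    """candidates: one list per word of (tag, likelihood-marker-or-None)."""
    # paths maps a tag to (score, tag-sequence-ending-in-that-tag)
    paths = {t: (1.0 / PENALTY.get(m, 1), [t]) for t, m in candidates[0]}
    for word in candidates[1:]:
        new = {}
        for tag, m in word:
            # extend the best-scoring path; 0.01 is a floor so that no
            # transition is treated as impossible
            score, seq = max((s * PAIR.get((p, tag), 0.01), q)
                             for p, (s, q) in paths.items())
            new[tag] = (score / PENALTY.get(m, 1), seq + [tag])
        paths = new
    return max(paths.values())[1]

# "the watch runs": watch is NN or VB (VB marked '@'), runs is NNS or VBZ
print(disambiguate([[("AT", None)], [("NN", None), ("VB", "@")],
                    [("NNS", None), ("VBZ", None)]]))   # ['AT', 'NN', 'VBZ']
```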
Another modification was to include an algorithm for altering the values of sequences of three tags, such as constructions with an intervening adverb or simple co-ordinated constructions, such that the two words on either side of a co-ordinating conjunction contained the same tag where a choice was available. No value in the matrix was allowed to be zero: a minimum positive value was provided for even extremely unlikely tag co-occurrences; this allowed at least some kind of analysis for unusual or eccentric syntax and prevented the system from grinding to a halt when confronted with a construction that it did not recognize.

Once these refinements to the suite of word tagging programs were made, the corpus was word-tagged. It was estimated that the number of manual post-editing interventions had been reduced from about 230,000 required for word tagging of the Brown corpus to about 35,000 required for the LOB corpus (Leech, Garside and Atwell, 1983: 36). The method achieves far greater consistency than could be attained by a human, were such a person able to labour through the task of attributing a tag to every word token in the corpus. A record of decisions made at the post-editing stage was kept for the purpose of recording the criteria for judging whether tags were considered to be correct or not (Atwell, 1982b).

Improving word tagging.

Work currently being undertaken at Lancaster includes revising and extending the word tag set and improving the suite of programs and data files required to carry out automatic word tagging.

Revision of the word tag set.

The word tag set is being revised so that, whenever possible, tags are mnemonic, such that the characters chosen for a tag are abbreviations of the grammatical categories they represent. This criterion for word tag improvement is solely for the benefit of human intelligibility, and in some cases, because of conflicting criteria of distinctiveness and brevity, it is not always possible to devise clearly mnemonic tags. For instance, nouns and verbs can be unequivocally tagged by the first letter abbreviations 'N' and 'V', but the same cannot be said for articles, adverbs and adjectives. These categories are represented by the tags 'AT', 'RR', and 'JJ'.

It was decided, on the grounds of improving mnemonicity, to change representation of the category of number in the tag set. In the old tag set, singular forms of articles, determiners, pronouns and nouns were unmarked, and plural forms had the same tags as the singular forms but with 'S' as the end character denoting plural. As far as mnemonicity is concerned, this is confusing, especially to someone uninitiated in the refinements of LOB tagging. In the new tag set, number is now marked by having '1' for singular forms, '2' for plural forms and no number character for nouns, articles and determiners which exhibit no singular or plural morphological distinctiveness (e.g., cod).

It is desirable, both for the purposes of human intelligibility and for mechanical processing, to make the tag system as hierarchized as possible. In the old tag set, modal verbs and forms of the verbs BE, DO and HAVE were tagged as 'MD', 'B*', 'D*', and 'H*' (where '*' represents any of the characters used for these tags denoting subclasses of each tag class). In the new word tag set, these have been recoded 'VM*', 'VB*', 'VD*', 'VH*', to show that they are, in fact, verbs, and to facilitate verb counting in a frequency analysis of the tagged corpus; 'VV*' is the new tag for lexical verbs.
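One payoff of this recoding is that simple string operations can recover categories from tags. The sketch below is our own, under the assumption that the conventions above (initial 'V' for verbs, final '1' or '2' for number) apply uniformly; the tag inventory is a small invented sample.

```python
# Sketch of decoding hierarchized tags by character position (ours).

def tag_info(tag):
    info = {}
    if tag.startswith("V"):
        info["class"] = "verb"
        info["subclass"] = {"VM": "modal", "VB": "be", "VD": "do",
                            "VH": "have", "VV": "lexical"}.get(tag[:2])
    elif tag.startswith("N"):
        info["class"] = "noun"
    if tag and tag[-1] in "12":
        info["number"] = "singular" if tag[-1] == "1" else "plural"
    return info

def count_verbs(tags):
    # frequency analysis of verbs becomes trivial once they share initial 'V'
    return sum(1 for t in tags if tag_info(t).get("class") == "verb")

print(tag_info("NN2"))                        # {'class': 'noun', 'number': 'plural'}
print(tag_info("VM"))                         # {'class': 'verb', 'subclass': 'modal'}
print(count_verbs(["AT", "NN1", "VM", "VV"])) # 2
```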
It has been taken as a design principle of the new tag set that, wherever possible, subcategories and supercategories should be retrievable by referring to the character position in the string of characters making up a tag, major word class coding being denoted by the initial character(s) of the tag and subsequent characters denoting morpho-syntactic subcategories.

Hierarchization of the new tag set is best exemplified by pronouns. 'P*' is a pronoun, as distinct from other tag-initial characters, such as 'N*' for noun, 'V*' for verb and so on. 'PP*' is a personal pronoun, as distinct from 'PN*', an indefinite pronoun; 'PPI*' is a first person personal pronoun (I, we, us), as distinct from the second person, third person and reflexive pronouns; 'PPIS*' is a first person subject personal pronoun, I and we, as distinct from the first person object personal pronouns, me and us, denoted by 'PPIO*'; finally, 'PPIS1:' is the first person singular subject personal pronoun, I (the colon is used to show that the form must have an initial capital letter).

The third criterion for revising and enlarging the word tag set is to improve and extend the linguistic categorization. For instance, a tag for the category of predicative adjective, 'JA', has been introduced for adjectives like ablaze, adrift and afloat, in addition to the existing distinction between attributive and ordinary adjectives, marked 'JB' as distinct from 'JJ'. There is an essential distributional restriction on subclasses of adjectives occurring only attributively or predicatively, and it was considered appropriate to annotate this in the tag set in a consistent manner. The attributive category has been introduced for comparative adjectives, 'JBR' (UPPER ...), and superlative adjectives, 'JBT' (UPMOST, UTTERMOST ...).

As a further example of improving the linguistic categorization without affecting the proportion of correctly tagged word forms, consider the word ONE. In the old tagging system, this word was always assigned the tag 'CD1'. This is unsatisfactory, even though ONE is always assigned the tag it is supposed to receive, because ONE is not simply a singular cardinal number. It can be a singular impersonal pronoun, as in One is often surprised, or a singular substitute form, as in He wants this one, contrasting, for instance, with the plural form He wants those ones. It is therefore appropriate for ONE to be assigned several potential tags ('CD1' among them), one of which is to be selected by the transition probability procedure.

Revision of the programs and data files.

Revision of the word tag set has necessitated extensive revision of the word- and suffixlists. The transition matrix will be adapted so that the corpus can be retagged with tags from the new word tag set. In addition, programs are being revised to reduce the need for special pre-editing and input format requirements. In this way, it will be possible for the system to tag English texts other than the LOB corpus without pre-editing.

Reducing Pre-editing.

For the 1983 version of the tagged corpus, a pre-editing stage was carried out partly by computer and partly by a human pre-editor (Atwell, 1982a). As part of this stage, the computer automatically reduced all sentence-initial capital letters and the human pre-editor recapitalized those sentence-initial characters that began proper nouns.
We are now endeavouring to cut out this phase so that the automatic tagging suite can process input text in its normal orthographic form as mixed case characters. Sentence boundaries were explicitly marked, as part of the input requirements of the tagging procedures, and since the word class of a word with an initial capital letter is significantly affected by whether it occurs at the beginning of a sentence, it was considered appropriate to make both sentence boundary recognition and word class assignment of words with a word-initial capital automatic. All entries in the word list now appear entirely in lower case, and words which occur with different tags according to initial letter status (board, march, may, white ...) are assigned tags according to a field selection procedure: the appropriate tags are given in two fields, one for the initial upper case form (when not acting as the standard beginning-of-sentence capitalization) and the other for the initial lower case form. The probability of tags being selected from the alternative lists is weighted according to whether the form occurs at the beginning of the sentence or elsewhere. Knut Hofland estimated a success rate of about 94.3 per cent without pre-editing (Leech, Garside and Atwell, 1983: 36). Hence, the success rate only drops by about 2 per cent without pre-editing. Nevertheless, the problems raised by words with tags varying according to initial capital letter status need to be solved if the system is to become completely automatic and capable of correct tagging of standard text.

Constituent analysis.

The high success rate of word tag selection achieved by the one-step probability disambiguation procedure prompted us to attempt a similar method for the more complex tasks of phrase and clause tagging. The paper by Garside and Leech in this volume deals more fully with this aspect of the work. Rules and symbols for providing a constituent analysis of each of the sentences in the corpus are set out in a Case-Law Manual (Sampson, 1984), and a series of associated documents give the reasoning for the choice of rules and symbols (Sampson, 1983-). Extensive tree drawing was undertaken while the Case-Law Manual was being written, partly to establish whether high-level tags and rules for high-level tag assignment needed to be modified in the light of the enormous variety and complexity of ordinary sentences in the corpus, and partly to create a databank of manually parsed samples of the LOB corpus, for the purposes of providing a first approximation of the statistical data required to disambiguate alternative parses. To date, about 35,000 words (1,500 sentences) have been manually parsed and keyed into an ICL 2900 machine. We are presently aiming for a tree bank of about 50,000 words of evenly distributed samples taken from different corpus categories, representing a cross-section of about 5 per cent of the word-tagged corpus.

The future.

It should be made clear to the reader that several aspects of the research are cumulative. For instance, the statistics derived from the tagged Brown corpus were used to devise the one-step probability program for word tag disambiguation. Similarly, the word-tagged LOB corpus is taken as the input to automatic parsing. At present, we are attempting to provide constituent structures for the LOB corpus.
Many of these constructions are long and complex; it is notoriously difficult to summarise the rich variety of written English, as it actually occurs in newspapers and books, by using a limited set of rewrite rules. Initially, we are attempting to parse the LOB corpus using the statistics provided by the tree bank, and subsequently, after error analysis and post-editing, statistics of the parsed corpus can be used for further research.

ACKNOWLEDGEMENTS

The work described by the author of this paper is currently supported by Science and Engineering Research Council Grant GR/C/7700.

REFERENCES

Abbreviation: ICAME = International Computer Archive of Modern English.

Atwell, E.S. (1982a). LOB Corpus Tagging Project: Manual Pre-edit Handbook. Unpublished document: Unit for Computer Research on the English Language, University of Lancaster.

Atwell, E.S. (1982b). LOB Corpus Tagging Project: Manual Post-edit Handbook (a grammar of LOB Corpus English, examining the types of error commonly made during automatic (computational) analysis of ordinary written English). Unpublished document: Unit for Computer Research on the English Language, University of Lancaster.

Francis, W.N. (1980). 'A tagged corpus - problems and prospects', in Studies in English linguistics for Randolph Quirk, edited by S. Greenbaum, G.N. Leech and J. Svartvik, 192-209. London: Longman.

Greene, B.B. and Rubin, G.M. (1971). 'Automatic Grammatical Tagging of English'. Providence, R.I.: Department of Linguistics, Brown University.

Hauge, J. and Hofland, K. (1978). Microfiche version of the Brown University Corpus of Present-Day American English. Bergen: NAVFs EDB-Senter for Humanistisk Forskning.

Jespersen, O. (1909-49). A Modern English Grammar on Historical Principles. Munksgaard.

Johansson, S. (1982) (editor). Computer Corpora in English Language Research. Bergen: Norwegian Computing Centre for the Humanities.

Johansson, S. and Jahr, M-C. (1982). 'Grammatical Tagging of the LOB Corpus: Predicting Word Class from Word Endings', in S. Johansson (1982), 118-.

Johansson, S., Leech, G. and Goodluck, H. (1978). Manual of information to accompany the Lancaster-Oslo/Bergen Corpus of British English, for use with digital computers. Unpublished document: Department of English, University of Oslo.

Kruisinga, E. and Erades, P.A. (1911). An English Grammar. Noordhoff.

Kucera, H. and Francis, W.N. (1964, revised 1971 and 1979). Manual of Information to accompany A Standard Corpus of Present-Day Edited American English, for use with Digital Computers. Providence, Rhode Island: Brown University Press.

Leech, G.N., Garside, R., and Atwell, E. (1983). 'Recent Developments in the use of Computer Corpora in English Language Research', Transactions of the Philological Society, 23-40.

Longman Dictionary of Contemporary English (1978). London: Longman.

Marshall, I. (1983). 'Choice of Grammatical Word-Class without Global Syntactic Analysis: Tagging Words in the LOB Corpus', Computers and the Humanities, Vol. 17, No. 3, 139-150.

Quirk, R., Greenbaum, S., Leech, G.N. and Svartvik, J. (1972). A Grammar of Contemporary English. London: Longman.

Quirk, R., Greenbaum, S., Leech, G.N. and Svartvik, J. (1985). A Comprehensive Grammar of the English Language. London: Longman.

Sampson, G.R. (1984). UCREL Symbols and Rules for Manual Tree-drawing. Unpublished document: Unit for Computer Research on the English Language, University of Lancaster.

Sampson, G.R. (1983-). Tree Notes I-XIV. Unpublished documents: Unit for Computer Research on the English Language, University of Lancaster.
EXTRACTING SEMANTIC HIERARCHIES FROM A LARGE ON-LINE DICTIONARY

Martin S. Chodorow
Department of Psychology, Hunter College of CUNY
and I.B.M. Thomas J. Watson Research Center
Yorktown Heights, New York 10598

Roy J. Byrd
George E. Heidorn
I.B.M. Thomas J. Watson Research Center
Yorktown Heights, New York 10598

ABSTRACT

Dictionaries are rich sources of detailed semantic information, but in order to use the information for natural language processing, it must be organized systematically. This paper describes automatic and semi-automatic procedures for extracting and organizing semantic feature information implicit in dictionary definitions. Two head-finding heuristics are described for locating the genus terms in noun and verb definitions. The assumption is that the genus term represents inherent features of the word it defines. The two heuristics have been used to process definitions of 40,000 nouns and 8,000 verbs, producing indexes in which each genus term is associated with the words it defined. The Sprout program interactively grows a taxonomic "tree" from any specified root feature by consulting the genus index. Its output is a tree in which all of the nodes have the root feature for at least one of their senses. The Filter program uses an inverted form of the genus index. Filtering begins with an initial filter file consisting of words that have a given feature (e.g. [+human]) in all of their senses. The program then locates, in the index, words whose genus terms all appear in the filter file. The output is a list of new words that have the given feature in all of their senses.

1. Introduction.

The goal of this research is to extract semantic information from standard dictionary definitions, for use in constructing lexicons for natural language processing systems. Although dictionaries contain finely detailed semantic knowledge, the systematic organization of that knowledge has not heretofore been exploited in such a way as to make the information available for computer applications.

Amsler (1980) demonstrates that additional structure can be imposed upon a dictionary by making certain assumptions about the ways in which definitions are constructed. Foremost among these assumptions is that definitions consist of a "genus" term, which identifies the superordinate concept of the defined word, and "differentia" which distinguish this instance of the superordinate category from other instances. By manually extracting and disambiguating genus terms for a pocket dictionary, Amsler demonstrated the feasibility of generating semantic hierarchies. It was our goal to automate the genus extraction and disambiguation processes so that semantic hierarchies could be generated from full-sized dictionaries.

The fully automatic genus extraction process is described in Section 2. Sections 3 and 4 describe two different disambiguation and hierarchy-extraction techniques that rely on the genus information. Both of these techniques are semi-automatic, since they crucially require decisions to be made by a human user during processing. Nevertheless, significant savings occur when the system organizes the presentation of material to the user. Further economy results from the automatic access to word definitions contained in the on-line dictionary from which the genus terms were extracted.
The information extracted using the techniques we have developed will initially be used to add semantic information to entries in the lexicons accessed by various natural language processing programs developed as part of the EPISTLE project at IBM. Descriptions of some of these programs may be found in Heidorn, et al. (1982), and Byrd and McCord (1985).

2. Head finding.

In the definition of car given in Figure 1, and repeated here:

car: a vehicle moving on wheels.

the word vehicle serves as the genus term, while moving on wheels differentiates cars from some other types of vehicles. Taken as an ensemble, all of the word/genus pairs contained in a normal dictionary for words of a given part-of-speech form what Amsler (1980) calls a "tangled hierarchy". In this hierarchy, each word would constitute a node whose subordinate nodes are words for which it serves as a genus term. The words at those subordinate nodes are called the word's "hyponyms". Similarly, the words at the superordinate nodes for a given word are the genus terms for the various sense definitions of that word. These are called the given word's "hypernyms". Because words are ambiguous (i.e., have multiple senses), any word may have multiple hypernyms; hence the hierarchy is "tangled".

Figure 1 shows selected definitions from Webster's Seventh New Collegiate Dictionary for vehicle and a few related words. In each definition, the genus term has been italicized. Figure 2 shows the small segment of the tangled hierarchy based on those definitions, with the hyponyms and hypernyms of vehicle labelled.

vehicle: (n) (often attrib) an inert medium in which a medicinally active agent is administered
vehicle: (n) any of various other media acting usu. as solvents, carriers, or binders for active ingredients or pigments
vehicle: (n) an agent of transmission : CARRIER
vehicle: (n) a medium through which something is expressed, achieved, or displayed
vehicle: (n) a means of carrying or transporting something : CONVEYANCE
vehicle: (n) a piece of mechanized equipment
ambulance: (n) a vehicle equipped for transporting wounded, injured, or sick persons or animals
bicycle: (n) a vehicle with two wheels tandem, a steering handle, a saddle seat, and pedals by which it is propelled
car: (n) a vehicle moving on wheels
tanker: (n) a cargo boat fitted with tanks for carrying liquid in bulk
tanker: (n) a vehicle on which a tank is mounted to carry liquids; also: a cargo airplane for transporting fuel

Figure 1. Selected dictionary definitions.

Our automated mechanism for finding the genus terms is based on the observation that the genus term for verb and noun definitions is typically the head of the defining phrase. This reduces the task to that of finding the heads of verb phrases and noun phrases.

medium   means   agent   equipment      (hypernyms of vehicle)
     \      \      |      /
               vehicle          boat   airplane
     /      |       \      \      \      /
ambulance bicycle   car        tanker    (hyponyms of vehicle)

Figure 2. The tangled hierarchy around "vehicle".
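The tangled hierarchy of Figure 2 falls out directly from the extracted (word, genus) pairs. The following is a minimal sketch of our own, using the Figure 1 examples; the data structures are illustrative and are not the format of the indexes described later in the paper.

```python
# Building hyponym and hypernym indexes from (word, genus) pairs; the
# pairs are the Figure 1/2 examples, abbreviated.

from collections import defaultdict

PAIRS = [("vehicle", "medium"), ("vehicle", "means"), ("vehicle", "agent"),
         ("vehicle", "equipment"), ("ambulance", "vehicle"),
         ("bicycle", "vehicle"), ("car", "vehicle"),
         ("tanker", "vehicle"), ("tanker", "boat"), ("tanker", "airplane")]

hyponyms = defaultdict(set)    # genus term -> words it defines
hypernyms = defaultdict(set)   # word -> its genus terms
for word, genus in PAIRS:
    hyponyms[genus].add(word)
    hypernyms[word].add(genus)

print(sorted(hyponyms["vehicle"]))   # ['ambulance', 'bicycle', 'car', 'tanker']
print(sorted(hypernyms["tanker"]))   # ['airplane', 'boat', 'vehicle']
```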
Applying this heuristic to the definitions for the 8,000 verbs that have definitions in Webster's Seventh showed that 2225 distinct verbs were used as heads of defi- nitions and that they were used 24,000 times. In other words, each genus term served as the hypernym for ten other verbs, on average. The accuracy of head finding for verbs was virtually 100 percent. Head finding is much more complex for noun definitions because of their ffeater variety. At the same time, the magnitude of the task (over 80,000 defining noun phrases) demanded that we use a heuristic procedure, rather than a full parser, which would have been pro- hibitively expensive. We were able to take advantage of the fact that dictionary definitions are written in a special and predictable style, and that their analysis does not require the full power of an analyzer for general English. The procedure used may be briefly described as follows. First the substring of the definition which must contain the head is found. This substring is bounded on the left by a word which obligatorily appears in prenominal po- sition: a, an, the, its, two, three ..... twelve, first, second, ... It is bounded on the right by a word or sequence that can only appear in postnominal position: • a relative pronoun (introducing a relative clause) • a preposition not followed by a conjunction (thus. introducing a complement to the head noun) • a preposition-conjunction-preposition configuration (also introducing a complement) • a present participle following a noun (thus, intro- ducing a reduced relative clause) The heuristic for finding the boundary on the right works because of certain restrictions on constituents appearing within a noun phrase. Emends (1976, pp. 167-172} notes that an adjective phrase or a verb phrase must end with its phrasal head if it appears to the left of the head noun in a noun phrase. For example, in the very old man, the adjective phrase very old has its head ad- jective in final position; in the quietly sleeping children, the verb phrase quietly sleeping ends in its head verb. Another constraint, the Surface Recursion Restriction (Emends, 1976, p. 19), prohibits free recursion of a node appearing within a phrase, to the left of the phrase head. This prevents prenominal modifying phrases from containing S and PP nodes. Taken together, the two restrictions specify that S, PP, and any other constituent which does not end in its head-of-phrase element cannot appear as a prenominal modifier and must, therefore, be postnominal. Lexical items or sequences that mark the beginnings of these constituents are used by the heuristic to establish the right boundary of the substring which must contain the head of the noun definition. Once the substring is isolated, the search for the head begins. Typically, but not always, it is the rightmost noun in the substring. If however, the substring contains a conjunction, each conjunct is processed separately, and multiple heads may result. [f the word found be- longs to a small class of "empty heads" (words like one, any, kind. class, manner, family, race. group, complex, etc.) and is followed by of, then the string following of is reprocessed in an effort to locate additional heads. 301 Applying this procedure to the definitions for the 40,000 defined nouns in Webster's Seventh showed that 10,000 distinct nouns were used as heads of definitions and that they were used 85,000 times. In other words, each genus term served as the hypernym for 8.5 other verbs, on average. 
3. Sprouting

Sprouting, which derives its name from the action of growing a semantic tree from a specified root, uses the results of head-finding as its raw material. This information is organized into a "hyponym index", in which each word that was used as a genus term is associated with all of its hyponyms. Thus, "vehicle" would have an entry which reads (in part):

    vehicle: ambulance ... bicycle ... car ... tanker ...

For a given part of speech, the hyponym index needs to be built only once.

When invoking the sprouting process, the user selects a root from which a semantic tree is to be grown. The system then computes the transitive closure over the hyponym index, beginning at the chosen root. In effect, for each new word (including the root), all of its hyponyms are added to the tree. This operation is applied recursively, until no further new words are found. The interactiveness of the sprouting process results from the fact that the user is consulted for each new word. If he decides that that word does not belong to the tree being grown, he may prune it (and the branches that would emerge from it). These pruning decisions result in the disambiguation of the tree. The user is assisted in making such decisions by having available an on-line version of Webster's Seventh, in which he may review the definitions, usage notes, etc. for any words of which he is unsure.

The output of a sprouting session, then, is a disambiguated tree extracted from the tangled hierarchy represented by the hyponym index. Actually, the output more nearly resembles a bush, since it is usually shallow (typically only 3 to 4 levels deep) and very wide. For example, a tree grown from vehicle had 75 direct descendants from the root, and contained over 250 nodes in its first two levels alone. The important aspect of the output, therefore, is not its structure, but rather the fact that the words it contains all have at least one sense which bears the property for which the root was originally selected.

It is important to note that any serious use of sprouting to locate all words bearing a particular semantic feature must involve the careful selection and use of several roots, because of the variety of genus terms employed by the Webster's lexicographers. For example, if it were desired to find all nouns which bear the [+female] inherent feature, sprouts should at least be begun from female, woman, girl, and even wife.
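A minimal sketch of the sprouting loop follows, under the assumption that the interactive step can be modeled as a yes/no callback; consult_user below is our stand-in for the on-line review step, not the system's actual interface.

```python
def sprout(root, hyponym_index, consult_user):
    """Transitive closure over the hyponym index from `root`,
    with interactive pruning of unwanted branches."""
    tree = {root: []}
    frontier = [root]
    while frontier:
        word = frontier.pop()
        for hyp in hyponym_index.get(word, []):
            if hyp in tree:
                continue                  # already in the tree
            if not consult_user(hyp):
                continue                  # pruned: this branch never grows
            tree[word].append(hyp)
            tree.setdefault(hyp, [])
            frontier.append(hyp)
    return tree

index = {"vehicle": ["ambulance", "bicycle", "car", "tanker"],
         "car": ["taxi"]}
print(sprout("vehicle", index, consult_user=lambda w: w != "tanker"))
# -> {'vehicle': ['ambulance', 'bicycle', 'car'], ..., 'car': ['taxi'], ...}
```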
4. Filtering

Filtering, like sprouting, results in lists of words bearing a certain property (e.g., [+human]). Unlike sprouting, however, filtering only picks up words all of whose senses have the property. It is based on a "hypernym index" (the inversion of the hyponym index), in which each word is listed with its hypernyms, as in the example given here:

    vehicle: agent equipment means medium

The filtering process begins with a "seed filter" consisting of an initial set of words all of whose senses bear some required property. The seed filter may be obtained in any manner that is convenient. In our work, this may be either from the semantic codes assigned to words by the Longman Dictionary of Contemporary English, or from morphological processing of word lists, as described in Byrd and McCord (1985). For example, morphological analysis of words ending in -man, -sman, -ee, -er, and -ist constitutes a rich source of [+human] nouns.

Given the filter, the system uses it to evaluate all of the words in the hypernym index. Any words, all of whose hypernyms are already in the filter, become candidates for inclusion in the filter during the next pass. The user is consulted for each candidate, and may accept or reject it. Finally, all accepted words are added to the filter, and the process is repeated until it converges.

An example of the filtering procedure applied to nouns bearing the [+human] inherent feature is given in Figure 3. It can be seen that the process converges fairly quickly, and that it is fairly productive, yielding, in this case, an almost two-for-one return on the size of the initial filter. For nouns with the [-human] inherent feature, an initial filter of 22,000 words yielded 11,600 new words on the first pass, with a false alarm rate of less than 1% based on a random sample of the output. From an initial filter of 15 [+time] nouns, 300 additional ones were obtained after three passes through the filter. These examples demonstrate another important fact about filtering: that it can be used to project the semantic information available from a smaller, more manageable source, such as a learner's dictionary, onto the larger set of words obtained from a collegiate-sized dictionary.

    Pass    Filter size    New words
     1        2,539*         1,091
     2        4,113**          234
     3        4,347             43
     4        4,390              0
     5        4,661***          49
    Final filter size: 4,710

    *   Obtained from the Longman Dictionary of Contemporary English
    **  Includes 483 new words from morphological analysis
    *** Includes 271 new words from morphological analysis

    Figure 3. A filtering of [+human] nouns.

As does sprouting, filtering produces a list of words having some desired property. In this case, however, the resulting words have the property in all of their senses. This type of result is useful in a parsing system, such as the one described in Heidorn et al. (1982), in which it may be necessary to know whether a noun must refer to a human being, not merely that it may refer to one.
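The pass structure just described can be sketched as follows; again, the function name and the consult_user callback are our illustrative assumptions, not the project's code.

```python
def filter_words(seed, hypernym_index, consult_user):
    """Grow a filter from `seed`: a word becomes a candidate when all
    of its hypernyms are already in the filter; the user vets each
    candidate, and passes repeat until no new words are accepted."""
    accepted = set(seed)
    while True:
        candidates = [w for w, hypers in hypernym_index.items()
                      if w not in accepted
                      and hypers and all(h in accepted for h in hypers)]
        new = {w for w in candidates if consult_user(w)}
        if not new:
            return accepted               # convergence
        accepted |= new

seed = {"person", "worker"}
index = {"farmer": ["person", "worker"], "tractor": ["machine"]}
print(filter_words(seed, index, consult_user=lambda w: True))
# -> {'person', 'worker', 'farmer'}
```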
5. Conclusion

This work receives its primary motivation from the desire to build natural language processing systems capable of processing unrestricted English input. As we emerge from the days when hand-built lexicons of several hundred to a few thousand entries were sufficient, we need to explore ways of easing the lexicon construction task. Fortunately, the tools required to do the job are becoming available at the same time. Primary among them are machine-readable dictionaries, such as Webster's and Longman, which contain the raw material for analysis. Software tools for dictionary analysis, such as those described here and in Calzolari (1984), are also gradually emerging. With experience, and enhanced understanding of the information structure in published dictionaries, we expect to achieve some success in the automated construction of lexicons for natural language processing systems.

References

Amsler, R. A. (1980), The Structure of the Merriam-Webster Pocket Dictionary, Doctoral Dissertation, TR-164, University of Texas, Austin.

Byrd, R. J. and M. C. McCord (1985), "The lexical base for semantic interpretation in a Prolog parser," presented at the CUNY Workshop on the Lexicon, Parsing, and Semantic Interpretation, 18 January 1985.

Calzolari, N. (1984), "Detecting Patterns in a Lexical Data Base," Proceedings of COLING/ACL-1984.

Emonds, J. E. (1976), A Transformational Approach to English Syntax. New York: Academic Press.

Heidorn, G. E., K. Jensen, L. A. Miller, R. J. Byrd, and M. S. Chodorow (1982), "The EPISTLE Text-Critiquing System," IBM Systems Journal, 21, 305-326.

Longman Dictionary of Contemporary English (1978), Longman Group Limited, London.

Webster's Seventh New Collegiate Dictionary (1963), G. & C. Merriam, Springfield, Massachusetts.
DICTIONARIES OF THE MIND

George A. Miller
Department of Psychology
Princeton University
Princeton, NJ 08544, USA

ABSTRACT

How lexical information should be formulated, and how it is organized in computer memory for rapid retrieval, are central questions for computational linguists who want to create systems for language understanding. How lexical knowledge is acquired, and how it is organized in human memory for rapid retrieval during language use, are also central questions for cognitive psychologists. Some examples of psycholinguistic research on the lexical component of language are reviewed with special attention to their implications for the computational problem.

INTRODUCTION

I would like to describe some recent psychological research on the nature and organization of lexical knowledge, yet to introduce it that way, as research on the nature and organization of lexical knowledge, usually leaves the impression that it is abstract and not very practical. But that impression is precisely wrong; the work is very practical and not at all abstract. So I shall take a different tack.

Computer scientists -- those in artificial intelligence especially -- sometimes introduce their work by emphasizing its potential contribution to an understanding of the human mind. I propose to adopt that strategy in reverse: to introduce work in psychology by emphasizing its potential contribution to the development of information processing and communication systems. We may both be wrong, of course, but at least this strategy indicates a spirit of cooperation.

Let me sketch a general picture of the future. You may not share my expectations, but once you see where I think events are leading, you will understand why I believe that research on the nature and organization of lexical knowledge is worth doing. You may disagree, but at least you will understand.

Some Technological Assumptions

I assume that computers are going to be directly linked by communication networks. Even now, in local area networks, a workstation can access information on any disk connected anywhere in the net. Soon such networks will not be locally restricted. The model that is emerging is of a very large computer whose parts are geographically distributed; large corporations, government agencies, university consortia, groups of scientists, and others who can afford it will be working together in shared information environments. For example, someday the Association for Computational Linguistics will maintain and update an exhaustive knowledge base immediately accessible to all computational linguists.

Our present conception of computers as distinct objects will not fade away -- the local workstation seems destined to grow smaller and more powerful every year -- but developments in networking will allow users to think of their own workstations not merely as computers, but as windows into a vast information space that they can use however they desire. Most of the parts needed for such a system already exist, and fiber optic technology will soon transmit broadband signals over long distances at affordable costs. Putting the parts together into large, non-local networks is no trivial task, but it will happen. Computer scientists probably have their own versions of this story, but no special expertise is required to see that rapid progress lies ahead. Moreover, this development will have implications for cognitive psychology.
However the technological implementation works out, at least one aspect raises questions of considerable psychological interest: in particular, how will people use it? What kind of man-machine interface will there be?

What might lie "beyond the keyboard," as one futurist has put it (Bolt, 1984), has been a subject for much creative speculation, since the possibilities are numerous and diverse. Although no single interface will be optimal for every use, many users will surely want to interact with the system in something reasonably close to a natural language. Indeed, if the development of information networks is to be financed by those who use them, the interface will have to be as natural as possible -- which means that natural language processing will be a part of the interface.

Natural Language Interfaces

Natural language interfaces to large knowledge bases are going to become generally available. The only question is when. How long will it take? Systems already exist that converse and answer questions on restricted topics. How much remains to be done?

Before these systems will be generally useful, three difficult requirements will have to be met. An interface must: (1) have access to a large, general-purpose knowledge base; (2) be able to deal with an enormous vocabulary; (3) be able to reason in ways that human users find familiar. Other features would be highly desirable (e.g., automatic speech recognition, digital processing of images, spatially distributed displays of information), but the three listed above seem critical.

Requirement (1) will be met by the creation of the network. How a user's special interests will shape the organization of his knowledge base and his locally resident programs poses fascinating problems, but I do not understand them well enough to comment. I simply assume that eventually every user can have at his disposal, either locally or remotely, whatever data bases and expert systems he desires.

Requirement (3), the ability to draw inferences as people do, is probably the most difficult. It is not likely to be "solved" by any single insight, but a robust system for revising belief structures will be an essential component of any satisfactory interface. I believe that psychologists and other cognitive scientists have much to contribute to the solution of this problem, but the most promising work to date has been done by computer scientists. Since I have little to say about the problem other than how difficult it is, I will turn instead to requirement (2), which seems more tractable.

THE VOCABULARY PROBLEM

Giving a system a large vocabulary poses no difficulty in principle. And everyone who has tried to develop systems to process natural language recognizes the importance of a large vocabulary. Thus, the vocabulary problem looks like a good place to start. The dimensions of the problem are larger than might be expected, however, so there has been some disagreement about the best strategy.

If, in addition to understanding a user's queries, the system is expected to understand all the words in the vast knowledge base to which it will have access, then it should probably have on the order of 250,000 lexical entries: at 1,000 bytes/entry (a modest estimate), that is 250 megabytes. Since standard dictionaries do not contain many of the words that are printed in newspapers (Walker & Amsler, 1984), another 250,000 entries would probably be required for proper nouns. Since I am imagining the future, however, I will assume that such large memories will be available inexpensively at every user's workstation.

It is not memory size per se that poses the problem. The problem is how to get all that information into a computer. Even if you knew how the information should be represented, a good lexical entry would take a long time to write. Writing 250,000 of them is a daunting task.

No doubt there are many exciting projects that I don't happen to know about, but on the basis of my perusal of the easily accessible literature there seem to be two approaches to the vocabulary problem. One uses a machine-readable version of some traditional dictionary and tries to adapt it to the needs of a language processing system. Call this the "book" approach. The other writes lexical entries for some fragment of the English lexicon, but formulates those entries in a notation that is convenient for computational manipulation. Call this the "demo" approach.

The book approach has the advantage of including a large number of words, but the information with each word is difficult to use. The demo approach has the advantage that the information about each word is easy to use, but there are usually not many words. The real problem, therefore, is how to combine these two approaches: how to attain the coverage of a traditional dictionary in a computationally convenient form.
Since I am imagining the future, however, I will assume that such large memories will be available inex- pensively at every user's workstation. It is not memory size per se that poses the problem. The problem is how to get all that information into a computer. Even if you knew how the information should be repre- sented, a good lexical entry would take a long time to write. Writing 250,000 of them is a daunting task. No doubt there are many exciting projects that I don't happen to know about, but on the basis of my perusal of the easily accessible literature there seem to he two approaches to the vocabu- lary problem. One uses a machine-read- able version of some traditional diction- ary and tries to adapt it to the needs of a language processing system. Call this the "book" approach. The other writes iexical entries for some fragment of the English lexicon, hut formulates those en- tries in a notation that is convenient for computational manipulation. Call this the "demo" approach. The book approach has the advantage of including a large number of words, but the information with each word is diffi- cult to use. The demo approach has the advantage that the information about each word is easy to use, but there are usual- ly not many words. The real problem, therefore, is how to combine these two approaches: how to attain the coverage of a traditional dictionary in a computa- tionally convenient form. 306 Q The Book Approach If you adopt the book approach, what you want to do is translate traditional dictionary entries into a notation that makes evident to the machine the morpho- logical, syntactic, semantic, and prag- matic properties that are needed in order to construct interpretations for senten- ces. Since there are many entries to be translated, the natural solution is to write a program that will do it automa- tically. But that is not an easy task. One reason the translations are dif- ficult is that synonyms are hard to find in a conventional dictionary. Alpha- betical ordering is the only way that a lexicographer who works by hand can keep track of his data, but an alphabetical order puts together words with similar spellings and scatters haphazardly words with similar meanings. Consequently, similar senses of different words may be written very differently; they may be written at different times and even by different people. (For example, compare the entries for the modal verbs 'can,' 'must,' and 'will' in the Oxford English Dictionary.) Only a very smart program could appreciate which definitions should be paraphrases of one another. Another reason that the translations are difficult is that lexicographers are fond of polysemy. It is a mark of care- ful scholarship that all the senses of a word should be distinguished; the more careful the scholarship, the greater the number of distinctions. When dictionary entries are taken literally the results for sentence inter- pretation are ridiculous. Consider an example. Suppose the language processor is asked to provide an interpretation for some simple sentence, say: "The boy loves his mother." And imagine it has available the text of Merriam-Webster's Ninth New Colleoiate D ~ . Ignoring sub-senses: "the" has 4 senses, "boy" has 3, "love" has 9 as a noun and 4 as a verb, "his" has 2 entries, and "mother" has 4 as a noun, 3 as an ad- jective, 2 as a verb. Such numbers invite calculation. 
If we assume the system has a parser able to do no more than recognize that "love" is a verb and "mother" is a noun, then, on the basis of the literal information in this dictionary, there are 4x3x4x2x4 - 384 candidate interpretations. This calcula- tion assumes minimal parsing and maximal reliance on the dictionary. Of course, no self-respecting parser would tolerate so many parallel interpretations of a sentence, but the illustration gives a feeling for how much work a good parser does. A-d all of it is done in order to "disambiguate" a sentence that nobody who knows English would consider to be the least ambiguous. : Synonymy and polysemy pose serious problems, even before we raise the ques- tion of how to translate conventional definitions into computationally useful notations. Any system will have to cope with synonymy and polysemy, of course, but the book approach to the vocabulary problem seems to raise them in acute forms, while providing little of the in- formation required to resolve them. With sufficient patience this approach will surely lead to a satisfactory solution, but no one should think it will be easy. The Vocabulary Matrix As presented so far, synonymy and polysemy appear to be two distinct prob- lems. From another point of view, they are merely two different ways of looking at the same problem. In essence, a conventional diction- ary is simply a mapping of senses onto words, and a mapping can be conveniently represented as a matrix: call it a vocab- ulary matrix. Imagine a huge matrix with all the words in a language across the top of the matrix, and all the different senses that those words can express down the the side. If a particular sense can be expressed by a word, then the cell in that row and column contains an entry; otherwise it contains nothing. The entry itself can provide syntactic information, or examples of usage, or even a picture -- whatever the lexicographer deems im- portant enough to include. Table 1 shows a fragment of a vocabulary matrix. Table i. Fragment of a Vocabulary Matrix Columns represent modal verbs; rows represent modal senses; 'E' in a cell means the word in that column can express the sense in that row. WORDS SENSES can may _ m u ~ ~ _ M i l 1 be able to E . . . be permitted to E E . . . be possible E E . . be obliged to . . E . certain to be . . E be necessary . . E expected to be . . E E 307 Several comments should be made about the vocabulary matrix. First, it should be apparent that any conventional dictionary can be repre- sented as a vocabulary matrix: simply add a column to the matrix for every word, and add a row to the matrix for every sense of every word that is given in the printed dictionary. (A lexical matrix can be viewed as an impractical w~y of printing a dictionary on a single, very large sheet of paper.) Second, entering such a matrix con- sists of searching down some column or across some row. So a vocabulary matrix can be entered either with a word or with a sense. Thus, one difference between conventional dicticnaries, which can be entered only with a word, and the dic- tionary in out mind, which can be entered with either words or senses, disappears when dictionaries are represented in this more abstract form. Third, if you enter the matrix with a sense and search along a row, you find all the words that express that sense. 
The Demo Approach

When the question is raised of what a computationally useful lexical entry should look like, it is time to shift from the book approach to the demo approach, where serious attempts have been made to establish a conceptual notation in which semantic interpretations can be expressed for computational use. By "the demo approach" I mean the strategy of building a system to process language that is confined to some well defined content area. Since language processing is a large and difficult enterprise, it is sensible to begin by trying out one's ideas in a small way to see whether they work. If the ideas don't work in a limited domain, they certainly won't work in the unlimited domain of general discourse. The result of this approach has been a series of progressively more ambitious demonstration programs.

Among those who take this approach, two extremes can be distinguished. On the one hand are those who feel that syntactic analysis is essential and should be carried, if not to completion, then as far as possible before resorting to semantic information. On the other hand are those who prefer semantics-based processing and consider syntactic criteria only when they get in trouble. The difference is largely one of emphasis, since neither extreme seems willing to rely totally on one or the other kind of information, and most workers would probably locate themselves somewhere in the middle. Since I am concerned here with the lexical aspects of language comprehension, however, I shall look primarily at semantics-based processing.

Vocabulary Size

Most of these demos have small vocabularies. It is surprising how much you can do with 1,500 well chosen words; a demo with more than 5,000 words would be evidence of manic energy on the part of its creator. A few thousand lexical entries have been all that was required in order to test the ideas that the designer was interested in.

The problem, of course, is that writing dictionary definitions is hard work, and writing them in LISP doesn't make it any easier. If you are satisfied with definitions that take five lines of code, then, obviously, you can build a much larger dictionary than if you try to cram into an entry all the different senses that are found in conventional dictionaries. But even with short definitions, a great many have to be written. If you want the language processor to have as large a vocabulary as the average user, you will have to give it at least 100,000 words.

One way to get a feeling for how many words that is is to translate it into a rate of acquisition. Several years ago I looked at Mildred Templin's (1957) data that way. Templin measured the vocabulary size of children of average intelligence at 6, 7, and 8 years of age. In two years they acquired 28,300 - 13,000 = 15,300 words, which averages out to about 21 words per day (Miller, 1977).
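Spelled out as a calculation (using a two-year, 730-day window, as in the text):

```python
# Children's rate of acquisition from Templin's data.
words_learned = 28_300 - 13_000          # vocabulary at 8 minus vocabulary at 6
print(round(words_learned / (2 * 365)))  # -> 21 words per day

# The same arithmetic for a hand-built computational lexicon:
# 100,000 entries written over ten years.
print(round(100_000 / (10 * 365), 1))    # -> 27.4 definitions per day
```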
Most people, when they hear that result, confess that they had no idea that children are learning new words at such a rapid rate. But the arithmetic holds just as well for computers as for children. If you want the language processor to have a vocabulary of 100,000 words, and if you are willing to spend ten years putting definitions into it, then you will have to put in more than 27 new definitions every day.

How far from this goal are today's demos? The answer should be simple, but it's not. It is hard to tell exactly how many words these systems can handle. Definitions are usually written in terms of a relatively small set of semantic primitives, and the inheritance of properties is assumed wherever possible. The goal, of course, is to create an unambiguous semantic representation that can be used as input to an inferencing system, so the form of these representations is much more important than their variety, at least in the initial experiments. In the hands of a clever programmer, a few hundred semantic primitives can really do an enormous amount of work.

Although it is often assumed that the fewer semantic primitives a system requires, the better it is, in fact there seems to be little advantage to keeping the number small. When the number of primitives is small, definitions become long permutations of that small number of different atoms (Miller, 1978). When the set of primitives gets too small, definitions become like machine code: the computer loves them, but people find them hard to read or write.

Combining Book and Demo

How large a set of semantic primitives do we need? It is claimed that Basic English can express any idea with only 850 words, but that really cuts the vocabulary to the bone. The Longman Dictionary of Contemporary English, which is very popular with people learning English as a second language, uses a constrained vocabulary of about 2,000 words (plus some specialized terms) to write its definitions.

Using the Longman dictionary as a guide, Richard Cullingford and I tried to estimate how much effort would be involved in creating a computationally useful lexicon. Our initial thought was to write LISP programs for 2,000 basic terms, then use Cullingford's language processor (Cullingford, 1985) to translate all of the definitions into LISP. We quickly realized, however, that the 2,000 words are polysemous; different senses are used in different definitions. As a rough estimate, we thought 12,000 basic concepts might suffice.

An examination of the Longman definitions also indicated that a great deal of information might have to be added to the translated definitions. Many of the simpler conceptual dependencies (information required for disambiguation, as well as for drawing inferences; Schank, 1975) have to be included in the definitions. Each translated definition would have to be checked to see that all sense relations, predicate-argument structures, and selectional restrictions were explicit and correct, and a wide variety of pragmatic facts (e.g., that "anyhow" in initial position signals a change of topic) would probably have to be added. We have not undertaken this task.
Not only would writing 12,000 definitions (and checking out and supplementing 50,000 more) require a major commitment of time and energy, but we do not have Longman's permission to use their dictionary this way. I report it, not as a project currently under way, but simply as one way to think about the magnitude of the vocabulary problem.

So the situation is roughly this: In order to have natural language interfaces to the marvellous information sources that will soon be available, one thing we must do is beef up the vocabularies that natural language processors can handle. That will not be an easy thing to accomplish. Although there is no principled reason why natural language processors should not have vocabularies large enough to deal with any domain of topics, we are presently far from having such vocabularies on line.

THE SEARCH PROBLEM

As we look ahead to having large vocabularies, we must begin to think more carefully about the search problem. In general, the larger a data base is, the longer it takes to locate something in it. How a large vocabulary can be organized in human memory to permit retrieval of word meanings at conversational rates is a fascinating question, especially since retrieval from the subjective lexicon does not seem to get slower as a person's vocabulary gets larger. The technical issues involved in achieving such performance with silicon memories raise questions I understand only well enough to recognize that there are many possibilities and no easy answers. Instead of speculating about the computer, therefore, I will take a moment to marvel at how well people manage their large vocabularies.

In the past fifteen years or so a number of cognitive psychologists have been sufficiently impressed by people's lexical skills to design experiments that they hoped would reveal how people do it. This is not the time to review all that research (see Simpson, 1984), but some of the questions that have been raised merit attention.

Psychologists have considered two kinds of theories of lexical access, known as search theories and threshold theories. Search theories assume that a passive trace is stored in the mental lexicon and that lexical access consists of matching the stimulus to its memory representation. Preliminary analysis of the stimulus is said to generate a set of candidates, which is searched serially until a match is found. Threshold theories claim that each sense of every word is an independent detector waiting for its features to occur. When the feature count for any sense gets above some threshold, that sense becomes conscious. Both kinds of theories can account for most of the experimental data, but not all of it -- which is unfortunate, since a clear decision in favor of one or the other might help to resolve the question of whether lexical access involves a serial processor with search and retrieval, or a parallel processor with simple activation. Since the brain apparently uses slow and noisy components, something searching in parallel seems plausible, but such devices are not yet well understood.

Accessing Ambiguous Words

Some of the most interesting psychological research on lexical access concerns how people get at the meanings of polysemous words. These studies exploit a phenomenon called priming: when a word in a given lexical domain occurs, other words in that domain become more accessible. For example, a person is asked to say, as quickly as possible, whether a sequence of letters spells an English word.
If the word DOCTOR has just been presented, then NURSE will be recognized more rapidly than if the preceding word had been unrelated, like BUTTER (Meyer & Schvaneveldt, 1971; Becker, 1980). The recognition of DOCTOR is said to prime the recognition of NURSE.

This lexical decision task can be used to study polysemy if the priming word is ambiguous, and if it is followed by probe words appropriate to its different senses. For example, the ambiguous prime PALM might be followed on some occasions by HAND and on other occasions by TREE. The question is whether all senses of a polysemous word are activated simultaneously, or whether context can facilitate one meaning and inhibit all others. Three explanations of the results of these experiments are presently in competition.

Context-dependent access -- Only the sense that is appropriate to the context is retrieved or activated.

Ordered access -- Search starts with the most frequent sense and continues serially until a sense is found that satisfies the context.

Exhaustive access -- Everything is activated in parallel at the same time, then context selects the most appropriate sense.

At present, exhaustive access seems to be the favorite. According to that theory, disambiguation is a post-access process; the access process itself is a cognitive "module," automatic and insulated from contextual influence. My own suspicion is that none of these theories is exactly right, and that Simpson (1984) is probably closer to the truth when he suggests that multiple meanings are accessed, but that dominant meanings appear first and subordinate meanings come in more slowly and then disappear. Psychological research on lexical access is continuing; the complete story is not yet ready to be told.

One aspect of the work is so obvious, however, that its importance tends to be overlooked.

Semantic Fields

The priming phenomenon presupposes an organization of lexical knowledge into patterns of conceptually related words, patterns that some linguists have called semantic fields. Apparently a semantic field can fluctuate in accessibility as a whole.
There is a causative primitive that differentiates "rise" and "raise," "fall" and "fell," "die" and "kill," and so on, yet the causative verbs "raise," "fell," "kill" do not form a causative semantic field. Johnson-Laird and I distinguished two classes of semantic primitives: those (like motion) around which a semantic field can form, and those (like causa- tion) used to differentiate concepts within a given field. Although the nature of semantic primitives is a matter of considerable interest to anyone who proposes a sem- antic notation for writing the defini- tions that a language processing system will use, they have received relatively little attention from psychologists. Experimental psychologlsts have a strong tendency to concentrate on questions of function and process at the expense of questions of content. Perhaps their attempts to understand the processes of disambiguation will stimulate greater interest in these structural questions. THE PROBLEM OF CONTEXT The reason that lexical polysemy causes so little actual ambiguity is that, in actual use, context provides information that can be used to select the intended sense. Although contextual disambiguation is simple enough when people do it, it is not easy for a compu- ter to do, even when the text is seman- tically well-formed. With semantically ill-formed input the problem is much worse. Children's Use of Dictionaries We have been looking at what happens when teachers send children to the dic- tionary to "look up a word and write a sentence using it." The results can be amusing: for example, Deese (1967) has reported on a 7th-grade teacher who told her class to look up "chaste" and use it in a sentence. Their sentences included: "The milk was chaste," "The plates were still chaste after much use," and "The amoeba is a chaste animal." In order to understand what they were doing, you have to see the diction- ary entry for "chaste': CHASTE: i. innocent of unlawful sexual intercourse. 2. celibate. 3. pure in thought and act, modest. 4. severely simple in design or execution, austere. As Deese noted, each of the children's sentences is compatible with information provided by the dictionary that they had been told to consult. You might think that Deese's obser- vation was merely an amusing reflection of some quirk in the dictionary entry foe "chaste," but that assumption would be quite wrong. Patti Gildea and I (Miller & Gildea, 1985) have confirmed Deese's observation many times over. We asked 5th and 6th grade children to look words up and to write sentences using them. As of this writing, our i0- and 11-year old friends have written a few thousand sen- tences for us, and we are still collect- ingthem. Our goal is to discover which kinds of mistakes are most frequent. In order to do this, we evaluate each sentence as we enter it into a data management system and, if something is wrong, we describe the mistake. By collecting our descrip- tions, we have made a first, tentative classification. This project is still going on, so I can give only a preliminary report based on about 20% of our data. So far we have analyzed 457 sentences incorporating 22 target words: 12 are relatively common words that most of the children knew, and i0 are relatively rare words with which they were unfamiliar. The common words 311 were selected from the core vocabulary of words introduced by authors of 4th-grade basal readers; the rare words were selec- ted from those introduced in 12th-grade readers (Taylor, Frackenpohl, & White, 1979). 
It is convenient to refer to them as the 4th-grade words and the 12th-grade words, respectively. Errors were relatively frequent. Of the sentences classified so far, only 21% of those using 4th-grade words were suf- ficiently odd or unacceptable to indicate that the author did not have a good grasp on the meaning and use of the word, but 63t of the sentences using 12th-grade words were judged to be odd= Thus, the majority of the errors occurred with the 12th-grade words. Table 2 shows our current classifi- cation. Note that the categories are not mutually exclusive: some ingenious young- sters are able to make two oz even three mistakes in a single sentence. Table 2 Classification of Sentences TYPe of. Sentence 4th-arade 12th~azade No mistake 197(249) 76(208) Selectional error i0 58 Wrong part of speech 4 41 Wrong preposition 4 24 Inappropriate topic 0 24 Used rhyming word 0 14 Inappropriate object 5 9 Wrong entry 4 9 Word not used 9 1 Object missing 5 3 Two senses confounded 4 3 No response 0 4 Not a word • 3 Unacceptable idiom 3 0 Sentence not complete 3 0 Most of the descriptive phrases in Table 2 should be self-explanatory, but some examples may help. Skip the selectional errors; I shall say more about them in a moment. Cons ider "Wrong part of speech": a student wrote "my hobby is 1 istening to Ouran Duran records, I have obtained an ACCRUE for it', thus using a verb as a noun. As an example of "Wrong prepo- sition," consider the student who wrote: aBe very METICULOUS on your work." An example of "Inappropriate topic" is: "The train was TRANSITORY." An example of "Inappropriate object" is: "I was METIC- ULOUS about falling off the cliff." Ex- amples of "Used rhyming word" are =Did it ever ACCRUE to you that Maria T. always marks with a special pencil on my face?', "Did you evict that old TENET?", and "The man had a knee REPARATION o" Other categories were even less fre- quent, so return now to the most common type of mistake, the one labelled "Selec- tional error=" Vlolatlons of Seleetlonal Preferences The sentences that Deese reported illustrate selectional errors. Further examples can be taken from our data= "We had a branch ACCRUE on our plant," "1 bought a battery that was TRANSITORY," "The rocket REPUDIATE off into the sky," "John is always so TENET to me=" It is unfair to call these sentences "errors" and to laugh at the children's mistakes= The students were doing their best to use the dictionary. If there was any mistake, it was made by adults who misunderstood the nature of the task that they had assigned. Take the "accrue" sentence, for ex- ample= The definition that the students saw was: ACCRUE= come as a growth or result= "In- terest will accrue to you every year from money left in a savings bank. Ability to think will accrue to you from good habits of study." We assume that the student read this def- inition looking for something she under- stood and found "come as a growth." She composed a sentence around this phrase: "We had a branch COME AS A GROWTH on our plant', then substituted "accrue" for it. This strategy seems to account for the other examples. A familiar word is found in the definition, a sentence is composed around it, then the unfamiliar word is substituted for the familiar word. Some further evidence supports the claim that something like this strategy is being used. One intriguing clue is that sometimes the final substitution is not made= the written sentence contains the word selected from the definition but not the word that it defined. 
And, since substitution is not a simple mental oper- ation for children, sometimes the selec- ted word or phrase from the definition is actually written in the margin of the paper, alongside the requested sentence. These are called selectional errors because they violate selectional pref- erences. For example, the girl who dis- covered that "stimulate" means "stir up" and so wrote, "Mrs. Jones stimulated the cake," violated the selectional prefer- ence that =stimulate" should take an ani- mate object. 312 One reason these errors are so fre- quent is that dictionaries do not pro- vide much information about selectional preferences. We think we know how to remedy that deficiency, but that is not what I want to discuss here. For the moment it suffices if you recognize that we have a plentiful supply ~f sentences containing violations of selectional preferences, and that the sentences are of some educational significance. Intelligent Tutoring? Now let me pose the following ques- tion. Could we use these sentences as a "bug catalog" in an intelligent tutoring system? At the moment, intelligent tutoring systems (Sleeman & Brown, 1982) use many menus to obtain the student's answers to questions, and some people feel that this is actually an advantage. But I suspect that if we had a good language interface, one that understood natural language re- sponses, it would soon replace the menus. In any case, imagine an intelligent tutoring system that can handle natural language input. Imagine that the tutor asked children to write sentences con- taining words that they had just seen defined, recognized when a selectional error had occurred, then undertook to ex- plain the mistake. What would the intelligent tutor have to know in order to detect and cor- rect a selectional error? Otherwise said, what more would it have to know than any language comprehender has to know? The question is not rhetorical~ I ask it because I would really like to know the answer. In my view, it poses something of a dilemma. The problem, as Yorick Wilks (1978) has pointed out, is that any simple rules of co-occurrence that we are likely to propose will, in real discourse, be violated as often as they are observed. (Not only do people often say one thing and mean another, but the prevalence of figurative and idioma- tic language is consistently underesti- mated by theorists.) If we give the intelligent tutor strict rules in order to detect selectional errors like "Our car depletes gasoline," will it not also treat "Our car drinks gasoline" as an error? On the other hand, if the tutor accepted the latter, would it not also accept the former? An even simpler dilemma, one often noted, is that a system that blocks such phrases as "colorless green ideas" will also block such sentences as "There are no colorless green ideas." If our tutor teaches children to avoid "stimulate the cake," will it also teach them to avoid =you can't stimulate a cake'? When subtle semantic distinctions are at issue, it is customary to remark that a satisfactory language understand- ing system will have to know a great deal more that the linguistic values of words. It will have to know a great deal about the world, and about things that people presuppose without reflection. Such remarks are probably true, but they offer little guidance in getting the job done. 
Since I have no better answer, I will simply agree that the lexical infor- mation available to any satisfactory lan- guage understanding system will have to be closely coordinated with the system's general information about the world. To pursue that idea would, of course, go beyond the lexical limits I have imposed here, but it does suggest that we will have to write our dictionary not once, but many times -- until we get it right. So, while there is no principled obstacle to having large vocabularies in our natural language interfaces, there are still many problems to be solved. There is work here for everyone -- lin- guists, philosophers, and psychologists, as well as computer scientists -- and it is not abstract or impractical work. The answers we provide will shape important aspects of the information systems of the future. References Amsler, R. A. (1984) Machine-readable dictionaries. Annual Review Qf Information Science and TeGhnolouv, 19, 161-209. Becket, C. A. (1980) Semantic context effects in visual word recognition: An analysis of semantic strategies. Memory & Cooni~ion, 8, 493-512. Bol t, R.A. (1984) The Human Interface: Where People and Computers meet. Belmont, Ca]if.: Lifetime Learning. Cullingford, R. E. (1985) Natural Lan- guage Processing: A Knowledge Engine- ering Approach. (Manuscript). Deese, J. meaning. 641-651. (1967) Meaning and change of American Psvcholooist, 22, 313 Meyer, D. E., & Schvaneveldt, R. W. (1971) Faciliation in recognizing pairs of words: Evidence of a depen- dence between retrieval operations. Journal ofLExDerimental_Psvcholoav, 90, 227-234. Miller, G. A. (1977) ADDrentices¢ Children and Lanauaue. New York: Seabury Press. Miller, G. A. (1978) Semantic relations among words. In M. Halle, J. Bresnan, & G. A. Miller (eds.), L i ~ Theor~ and Psvcholoaical RealitY° C~mhridge, Mass.: MIT Press. Miller, G. A., & Gildea, P. M. (1985) How to misread a dictionary. AILA Bulletin (in press). Miller, G. A., & Johnson-Laird, P. N. (1976) Lanuuaue and Perception. Cambridge, Mass.: Harvard University Press. Procter, P. (ed.) (1978) Z d ~ tionarv of Contemporary Enulish. Harlow, Essex: Longman. chank, R. C. (1975) marion Processing. North-Holland. Conceotual Infor- Amsterdam: Simpson, G. B. (1984) Lexical ambiguity and its role in models of word recog- nition° Psvcholoaical Bulletin, 96, 316-340. Sleeman, D., & Brown, J. S. (eds.) (1982) Intelliaent Tutorina Systems. New York: Academic Press. Taylor, S. E., Frackenpohl, H., & White, C. E. (1979) A revised core vocab- ulary. In EDL Core Vocabularies in ~Eadinu. Mathematics. Science. and • " . New York: McGraw-Hill. Templin, M. C. (1957) Certain Lanuuaae Skills in Children= Their DeveloomenE and Interrelationships. Minneapolis: University of Minnesota Press. Walker, D. E., & Amsler, R. A. (1984) The use of machine ~eadable diction- aries in subianguage analysis. In R. I. Kittredge (ed.), Workshop on Sub~ lanuuage Analv~iSo (Available from the authors at Bell Communications Re- search, 435 South Street, Mocristown, NJ 07960.) Wilks, Y. A. (1978) Making preferences more active. Artificial Intslliaence, 11, 197-223. 314
THE USE OF SYNTACTIC CLUES IN DISCOURSE PROCESSING

Nan Decker
1834 Chase Avenue
Cincinnati, Ohio 45223, USA

ABSTRACT

The desirability of a syntactic parsing component in natural language understanding systems has been the subject of debate for the past several years. This paper describes an approach to automatic text processing which is entirely based on syntactic form. A program is described which processes one genre of discourse, that of newspaper reports. The program creates summaries of reports by relying on an expanded concept of text grounding: certain syntactic structures and tense/aspect pairs indicate the most important events in a news story. Supportive, background material is also highly coded syntactically. Certain types of information are routinely expressed with distinct syntactic forms. Where more than one episode occurs in a single report, a change of episode will also be marked syntactically in a reliable way.

INTRODUCTION

The role that syntactic structure should play in natural language processing has been a matter of debate in computational linguistics. While some researchers eschew syntactic processing as giving a poor return on the heavy investment of a parser (Schank and Riesbeck, 1981), others make syntactic representations the basis from which further work is done (Sager, 1981; Hirschman and Sager, 1982). Current syntax-based processors tend to work only within a narrow semantic domain, since they rely heavily on word co-occurrence patterns which hold only within texts from a particular sublanguage. Knowledge-based processors, on the other hand, can operate on a less restricted semantic field, but only if sufficient knowledge in the form of scripts, frames, and so forth, is built into the program.

This paper describes a syntactic approach to natural language processing which is not bound to a narrow semantic field, and which requires little or no world knowledge. This approach has been demonstrated in a computer program called DUMP (Discourse Understanding Model Program), which relies solely on syntactic structure to create summaries of one particular genre of discourse -- that of newspaper reports -- and to label the kinds of information given in them (Decker, 1985). The process for creating these summaries differs substantially from the word-list and statistical methods used by other automatic abstractor programs (Borko and Bernier, 1975). The DUMP program therefore depends on a predictable discourse genre or style, rather than a predictable sublanguage lexicon or body of world knowledge.

DUMP was developed from a corpus of over 5,800 words representing twenty-three news reports from three daily newspapers: the New York Times, the Boston Globe, and the Providence Journal/Evening Bulletin. With one exception, each story appeared in the upper right-hand column of the front page. The stories in the corpus were chosen randomly, and the only criterion for rejection was too large a percentage of quoted material. Only the first two hundred words or so of each story were included in the corpus in order to allow a greater sampling of reports. The discourse principles at work are fairly represented in an excerpt of this length.

The input to the DUMP program consists of a list of hand-parsed sentences making up each story. Ideally, these parse trees should be the output of a parsing program. In fact, about one-third of the sentences were passed through the RUS parser (Woods, 1973).
RUS experienced difficulty with some of these sentences for a number of reasons: the parser was operating without a semantic component, and arcs from nodes were ordered with the expectation of feedback from semantics; RUS lacked some rules for structures which appear with regularity in the news; it attempted to give all the parses of a sentence, where DUMP only required one, and that not necessarily the correct or complete one (about which more later); and DUMP's rules call for certain syntactic labels which are not ordinarily assigned by parsing programs (negative and adversative clauses, for example). However, it should be stressed that none of these difficulties represents parsing problems of theoretical import. All could be resolved by extensions to existing components of the ATN and its dictionary.

THE DISCOURSE STRUCTURE OF NEWS REPORTS

The syntactic rules used by DUMP work because of the predictable, almost formulaic discourse structure of hard news reports.* Two journalistic devices above all else characterize hard news: the inverted pyramid, and the block paragraph (Green, 1979). The inverted pyramid refers to the convention of relating the most important facts of a news story in the first paragraph, followed by less important information given in descending order (or, it may be argued, random order) of importance. Thus, the news differs markedly from canonical story form in which material is given in chronological order. The block paragraph, the second device, is one which stands independent of paragraphs adjacent to it. This unit contains no logical connectives (however, in addition, moreover) which link it to preceding or following paragraphs. The avoidance of such connectives allows the newspaper editor to quickly delete paragraphs from a story in the morning edition to fit into the evening edition without rewriting. The block paragraph is short: over sixty percent of the paragraphs in the corpus are only one sentence long; about one-half have two sentences, and less than one percent have three sentences. The effect is that most sentences of the report are presented at the same level of importance: there is no orthographic unit larger than the sentence which reliably indicates that a group of sentences is related topically or episodically. In place of the normal paragraph, we shall see, is a highly reliable level of syntactic coding which links sentences into episodes.

* Features, sports reports, and so forth have their own discourse structure.

At a lower level of organization than the inverted pyramid and block paragraph are the two discourse units which DUMP relies on: the episode, and within the episode, the information field as found in the detached clause. News reports may contain more than one episode. A new episode begins when the set of characters and/or setting (temporal or geographical) changes. The detached clause is defined intonationally: it is bounded by pauses, has falling intonation at the end, or is preceded by a clause with falling intonation (Thompson, 1983). This clause is almost always set off in text with commas. So, for example, the following sentence from the ninth story in the corpus ("Arafat Forces Lose Key Position," Boston Globe, November 7, 1983) consists of four detached clauses, or information fields:

    (9:3)** Arafat's soldiers, who resisted the assault, fell back six miles to Beddawi, the remaining PLO stronghold in the area, and Nahr el Bared is now surrounded by Syrian soldiers ....
** The first number indicates the story in the corpus, the second the number of the sentence within that story.

The information fields here are: a nonrestrictive relative clause ("who resisted the assault"), an appositive ("the remaining PLO stronghold in the area"), and two main clauses ("Arafat's soldiers fell back..." and "Nahr el Bared is now surrounded..."). There are a small number of syntactic forms which reliably indicate the beginning of new episodes. Likewise, there is a strong correlation between the category of information the journalist conveys in each detached clause and the syntactic structures used for its expression. For example, the nonrestrictive relative clause in 9:3 expresses background events, the appositive expresses an identification of place, and the two main clauses express a main event and a current state, respectively. The next two sections will look at the syntactic correlates of the information field and the episode boundary in detail.

Syntactic Correlates of the Information Field

The syntactic rules used by DUMP reflect grounding principles found universally in discourse (Grimes, 1975). Certain assertional structures in text deliver foreground information, which tells the events of the narrative and moves the story forward. These events comprise a summary of the story. Less assertional structures are used to express background, supportive information which fleshes out the skeleton provided in the foreground but does not move the action forward. There is a strong correlation between the syntactic form and information type of this supportive material which allows DUMP to subcategorize it into the following classes: past events and processes leading up to the most recent development in the story; plans for the future; current state of the world; information of secondary importance; identifications; import of the story; effects of actions; comments made by participants in the story; and collateral (things which did not happen).

This division of material into foreground vs. background gives text its texture. A narrative in which everything is presented at the same level of prominence tends to be monotonous. One of the chief means of distinguishing foreground from background is tense and aspect, which has been called a sort of flow-of-control mechanism, allowing the reader to pick out the most important parts of a discourse (Hopper, 1979). Sentences with simple past verbs in the active voice are the chief conveyors of foreground material in news. This fact recalls the broader concept of transitivity put forth by Hopper and Thompson (1980), whereby certain properties of the verb and its arguments transfer the action from agent to patient more effectively than others. Foregrounded clauses have high transitivity, backgrounded clauses low transitivity.

High transitivity verbs are kinetic, telic, punctual, volitional, affirmative, and realis. Kinetic verbs allow easy transfer of action from subject to object. Throw is therefore kinetic, while the copular to be is not. Telic verbs are those which express an action with a natural endpoint. The verb make in "John is making a chair" is telic, while the verb sing in "John is singing" is not. Telic and atelic verbs can be distinguished by their entailments: if John is interrupted while making a chair, it is not true that he has made a chair, but if he is interrupted while singing, it is still true that he has sung (Comrie, 1976).
Punctual verbs (sneeze, kick) refer to actions with no obvious internal structure. Study and carry are examples of non-punctual verbs. Volitional verbs ("I wrote his name") have greater transitivity than non-volitional verbs ("I forgot his name") (Hopper and Thompson, 1980, p. 252). Affirmation distinguishes collateral information from all other types. And finally, the realis mode distinguishes events which have existed from those which only might have or would have. Main event clauses therefore never contain modals. The differential behavior of verbs from these semantic classes has been described by a number of taxonomers (Comrie, 1976; Mourelatos, 1981; Ota, 1963; Vendler, 1967).

Arguments high in transitivity are those which are strong agents, totally affected and highly individuated. Strong agents are human rather than non-human: "George startled me" has more transitivity than "The picture startled me" (Hopper and Thompson, 1980, p. 252). Objects which are wholly affected lend greater transitivity than those which are only partially affected ("I drank the milk" vs. "I drank some milk"). Likewise, more highly individuated ones, defined as proper, human or animate, concrete, singular, count and definite, add more transitivity than less individuated ones.

These transitivity parameters assume a good deal of semantic knowledge about verbs and their arguments. In fact, the affirmative and realis features are the only ones reflected in DUMP's rules. But in another respect, Hopper and Thompson's notion of transitivity must be extended. An examination of tense and aspect alone is not sufficient to distinguish foreground from background in the DUMP corpus. The type of clause in which the verb appears is also crucial. So, for example, the simple past may be used to convey both foreground and background material, depending on the type of clause in which it occurs: in main clauses, it will always convey the most recent events in a story, while in relative clauses, it will always convey past events. The first two sentences of story 6 ("Stone Meets with Salvador Rebel Official," Boston Globe, August 1, 1983) illustrate the distinct uses of the two clause types.

(6:1) After weeks of maneuvering and frustration, presidential envoy Richard B. Stone met face-to-face yesterday for the first time with a key leader of the Salvadoran guerrilla movement.

Here, the simple past is used in a main clause to foreground information.

(6:2) "The ice has been broken," proclaimed President Belisario Betancur of Colombia, who engineered the meeting.

The simple past engineered in a relative clause indicates background material. The information-bearing capacities of these two clause types, when they occur with the simple, active past, are in complementary distribution in newswriting. The main clause is more assertional than the relative clause; it is used to give information which the writer assumes the reader is seeing for the first time. The relative clause, on the other hand, is more presuppositional. The writer uses it to convey old information which is of lesser importance or which the reader may already have knowledge of. Sentences 6:1 and 6:2 illustrate the way in which syntactic forms provide information which might otherwise need to be culled from world knowledge. We know that the planning of a meeting precedes its occurrence, but no such knowledge is necessary here, since the past verb form in a relative clause signals an event which occurred before the main event.
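The transitivity parameters listed above lend themselves to a simple checklist score. The following is a minimal, hedged sketch of Hopper and Thompson's proposal, not part of DUMP (which, as noted above, uses only the affirmative and realis features); the feature names and values are invented for illustration and would have to come from a lexicon or hand coding.

```python
# Hopper and Thompson's high-transitivity properties as a checklist.
TRANSITIVITY_FEATURES = [
    "kinetic", "telic", "punctual", "volitional", "affirmative", "realis",
    "strong_agent", "object_totally_affected", "object_individuated",
]

def transitivity_score(clause_features):
    """Count how many high-transitivity properties a clause exhibits."""
    return sum(bool(clause_features.get(f)) for f in TRANSITIVITY_FEATURES)

# "George startled me" vs. "The picture startled me": the human agent
# adds one parameter, so the first clause scores higher.
george = {"kinetic": True, "punctual": True, "affirmative": True,
          "realis": True, "strong_agent": True}
picture = dict(george, strong_agent=False)
assert transitivity_score(george) > transitivity_score(picture)
```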
The so-called "hot news" present perfect in a main clause ("The president has resigned") signals a main event if it occurs in the first sentence of a story. Its appearance further down, or in a non-main clause, signals information about past events or states. Two sentences from story 16 ("Peronists Suffer Stunning Defeat in Argentine Vote," New York Times, November 1, 1983) illustrate this.

(16:1) The leader of a middle-class party has swept to victory in Argentina's presidential elections ....

(16:4) The election, called by the ruling military, was a stunning defeat for the Peronists, who have dominated Argentina's political life since their party was founded in 1945 by Juan Domingo Peron.

In 16:1, the present perfect has swept is used in the hot news sense. In 16:4, the present perfect have dominated is used in a relative clause with an adverbial phrase ("since their party was founded in 1945...") to describe a state that has existed for decades. Note also that the verb dominate is atelic and non-punctual, and therefore low in transitivity. However, knowledge of the verb's semantic class is not necessary to identify the relative clause as supportive. The mere fact that the verb is in a relative clause, or the fact that the present perfect appears after the first sentence, suffices.

Syntactic clues may be used to avoid the need for time programs which determine the relative timing of events by interpreting adverbials. The following main clauses use the present perfect, but since they are non-initial, the states and events referred to in them must have occurred before the main event in the story ("O'Neill Now Calls Grenada Invasion 'Justified' Action," New York Times, November 9, 1983).

(19:5) Pressures to pass a strict 60-day legal limit [to the stay of U.S. troops in Grenada] have eased in the past week.

(19:6) Both houses have passed such measures, but the Senate version has been bottled up because it was attached to a debt-ceiling bill.

(19:7) Other versions of the 60-day War Powers Resolution have been introduced but not acted upon.

The appearance of the present perfect this far into the story means that the time phrase in the past week does not have to be interpreted by a time program. Likewise, the use of the passive simple past in a main clause indicates that the event is supportive material: main events, it turns out, are never expressed with passive voice in the corpus. In story 14 ("U.S. Says Moscow Threatens to Quit Talks on Missiles," New York Times, October 12, 1983), there is no need to interpret the adverbials in 1980 and in 1979 with a time program, unless relative ordering of background events is desired. The mere presence of the passive marks these events as occurring before the time of the main events in the story.

(14:8) Talks on a comprehensive test ban of nuclear devices were suspended in Geneva in 1980, and the Geneva negotiations were suspended in 1979.

Main events, then, are expressed in main clauses with simple past verbs. Events and states which existed before these main events are expressed with a greater variety of syntactic forms, from main clauses, to relative and subordinate clauses, down to noun phrases (which are not analyzed by DUMP).
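Read as a decision procedure, the regularities stated in this section and the preceding one can be sketched as follows. This is not DUMP's actual code; it is a hypothetical paraphrase of the stated rules, with invented names for the labels and categories.

```python
def classify_clause(clause_type, tense, voice, sentence_index):
    """Paraphrase of the stated regularities; all names are illustrative.

    clause_type: "main" or "relative"; tense: "simple_past" or
    "present_perfect"; voice: "active" or "passive"; sentence_index
    counts sentences from the start of the report (0 = lead sentence).
    """
    if clause_type == "main" and tense == "simple_past" and voice == "active":
        return "main_event"          # the foreground of the report
    if tense == "present_perfect":
        if clause_type == "main" and sentence_index == 0:
            return "main_event"      # the "hot news" perfect in the lead
        return "past_event"          # elsewhere it reports prior states/events
    if voice == "passive" and tense == "simple_past":
        return "supportive"          # main events are never passive in the corpus
    if clause_type == "relative" and tense == "simple_past":
        return "past_event"          # relative clauses carry background
    return "unclassified"

# (6:1): main clause, simple past, active, lead sentence -> main event;
# (6:2): "engineered" in a relative clause -> past (background) event.
assert classify_clause("main", "simple_past", "active", 0) == "main_event"
assert classify_clause("relative", "simple_past", "active", 1) == "past_event"
```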
Nominalizations are perhaps the most frequent conveyors of background information in the news. The nominalization rule transforms a sentence into a noun phrase which can then be inserted into another sentence. It is a highly presuppositional structure, since the subject and object of the original verb are often deleted during the transformation and the reader must then supply these arguments from world knowledge. An example from the second story in the corpus ("Lebanon Needs Israeli Troops, Shultz Told," Boston Globe, March 14, 1983) shows the heavy use of nominalizations to create a very long prepositional phrase which contains not a single verb:

(2:2) In the first high-level contacts between the two governments since the start early this year of US-Israeli-Lebanese negotiations on the withdrawal of Israel's forces from Lebanon, ....

We will see other uses of nominalization to express other information categories and to refer to episodes with a single word.

The following incomplete list gives a cursory look at the strong correlation between the remaining information categories in news reports and the syntactic forms used to express them. Most of the examples are from story 6, about envoy Stone's meeting with a Salvadoran guerrilla leader, and story 16, about the defeat of the Peronists in Argentina's elections. The next two categories, Current States and Plans, also locate events or states in time, and therefore must occur in finite clauses.

- Current States: This category describes the state of the world at the time the report is written. Current states are expressed with simple present or present progressive verbs used in main clauses and in subordinate and relative clauses.

(6:10) Stone has repeatedly sought to meet with political leaders of the Salvadoran left, all of whom live in exile, ....

(16:11) The country Mr. Alfonsin is due to govern is racked by a deep economic crisis.

- Plans: These may be expressed with appropriate modals (will, would) in the same structures used for Current States.

(6:10) His mission is to encourage participation by the left in Salvadoran elections, which will probably be held in March 1984.

(16:10) Military officials said the ruling junta would consider it in a meeting Tuesday.

Certain verbs which express present planning (come, go, leave, start) can be used to indicate future time with the present tense: "Fiscal year 1983, which begins Oct. 1 ....".

It seems to be a discourse principle of journalese that while non-main events may be "promoted" to expression by the most assertive clause type, they may also be expressed with less assertional forms: subordinate and relative clauses, nominalizations, etc. The converse, however, is not true. Main events may never be "demoted" to expression by any other than the most assertive form.

The remaining information types do not locate actions in time, and therefore are free to appear in constructions without finite verbs.

- Import: This category is occasionally expressed with equative sentences of the form: NP V-be NP. The subject and predicate NPs tend to be nominalizations, with the former referring to the main episode.

(16:4) The election...was a stunning defeat for the Peronists ....

Election refers to the main event introduced in 16:1. 16:4 tells why that event is newsworthy. Nonrestrictive PPs with nominalizations as heads may also express Import:

(4:1) The...Budget Committee, in a major blow to President Ronald Reagan, voted yesterday to hold the real growth in defense spending to 5 percent next year ....
("Senate Panel Trims Reagan Arms Budget," Boston GLobe, April 8, 1983) Identifications: With only one exception, all identifications in the corpus are made with pre- nominal modifiers ("Prime Minister Smith") or with appositives, which may be embedded recur- siveLy: (6:3) ...Stone...talked with Ruben Zamora, the No. 2 Leader of the Revolutionary Demo- 318 cratic Front, the:politicaL arm of the five Marxist-led guerrilla bands fighting gov- ernment forces here. Effects: Detached participial phrases are used to tell the effects of the actions described in main clauses. (16:1) The leader of a middle-class party has swept to victory in Argentina's presi- dential elections, handin~ the union-based Peronists their first election defeat ~n nearly four decades. Comments: Comments are simply quotations from people involved in an event. While in other narra- tives, dialogue is often the chief means of tell- ing a story and moving the action forward, this is not the case in newswriting. Mere, quotes from participants add flavor and give supplementary information, but they are never the sole vehicle for informing readers of an event. This is a lucky fact, sSnce the syntactic forms used in quoted speech are usually much less constrained than those in non-quoted portions. (16:5) "We are entering a new stage," the 56-year old Mr. Alfonsin, whose politics are Left of center, said in a television interview early today. Collateral: News reports tell what did not happen in a story, what events and processes never were, with surprising frequency. This information category is expressed by negations of clauses, including negative existentials, neg- ative subordinate clauses, and various negative prefixes and prenominal modifiers. (6:7) Salvadoran officials had no immediate comment on what they heard from Stone .... (6:9) Stone had been unable to arrange a meeting with the Salvadoran rebel leaders... earlier this month. If it were the case that the correspondence between a syntactic form and the information types it expresses was one-to-many, this relation would not be of much help in automatic processing. In fact, the correspondence is closer to one-to-one, so that, for example, equatives only express im- port and not identifications, as would be natural in conversational English ("Smith is mayor of the city"). DUMP was successful in creating good summaries and labeling the information types for all but two of the twenty-three stories in the corpus. These two exceptions were highly eventful, chronological accounts and DUMP had difficulty distinguishing minor events from major ones. in addition, after the completion of the program, it performed well with a final story not from the corpus. Syntactic Correlates of Episode Boundaries About one-thlrd of the stories in the DUMP corpus consist of more than one episode. Story 17, given here with its DUMP-derived analysis of infor- mation, contains three minor episodes in addition to the major one introduced in the first sentence of the report. The discussion below of syntactic forms used to indicate episode boundaries will call upon this story for examples. Story 17 The New York Times, November 4, 1983 "Senate Approves Secret U.S. Action Against Managua" By Martin Tolchin Special to the New York Times Washington, Nov. 3 - i. The Senate today approved by voice vote continued aid for covert operations In Nicaragua. Z. The approval was made contingent upon notification to the intelli- gence committee of the goals and risks of specific covert projects. 3. 
3. The action would provide only $19 million of the $50 million that the Administration sought for covert operations in Central America, mostly in Nicaragua. 4. Those funds are expected to run out in less than six months, when the Central Intelligence Agency would have to give an account of its activities as it sought the rest of the funds. 5. The vote followed an hourlong debate that focused on covert United States activity in Nicaragua, which was banned in a House-passed bill. 6. The House bill would provide $50 million in open assistance to any friendly Central American government. 7. House and Senate conferees will now seek to resolve differences in the two measures, and the Nicaraguan dispute is expected to be a stumbling block in the negotiations.

Judge Orders Investigation*

8. In San Francisco, a Federal district judge ordered Attorney General William French Smith to conduct a preliminary investigation of charges that President Reagan and other Government officials violated the Neutrality Act by supporting the activities of paramilitary groups seeking to overthrow the Nicaraguan government. 9. The ruling came in a lawsuit filed by Representative Ronald V. Dellums, Democrat of California [Page A9]. 10. Senator Daniel Patrick Moynihan, the New York Democrat who is vice chairman of the Intelligence Committee, told the Senate that the Administration had modified its covert policy last summer, and was not supporting the insurgents seeking to overthrow the Sandinista government.

* DUMP does not analyze either subtitles, which not all newspapers use, or titles.

Summary of Main Events: The Senate today approved by voice vote continued aid for covert operations in Nicaragua. Senator Daniel Patrick Moynihan told the Senate that the Administration had modified its covert policy last summer and was not supporting the insurgents seeking to overthrow the Sandinista government.

Past Events: ...which [covert US activity in Nicaragua] was banned in a House-passed bill.

Current State: Those funds are expected to run out in less than six months. ...the Nicaragua dispute is expected to be a stumbling block in the negotiations.

Plans: Sentence 3. ...when [in less than six months] the Central Intelligence Agency would have to give an accounting of its activities as it sought the rest of the funds. Sentence 6. House and Senate conferees will now seek to resolve differences in the two measures.

Secondary:* The approval was made contingent upon notification to the intelligence committee of the goals and risks of specific covert projects.

* This category is not a very reliable one. It includes clauses with passives and copulas.

Identifications: ...Moynihan, the New York Democrat who is vice chairman of the Intelligence Committee.

The remaining uncategorized sentences are episode markers and will be discussed below.

* * * * *

As noted earlier, orthographic paragraphs are not used in newswriting to indicate episode boundaries. In their place are a small number of constructions which regularly introduce new episodes, relating them temporally to previous episodes. These structures include the double container sentence, the sentence introduced with a non-restrictive location PP, the link-S, and the detached time adverbial with a nominalization in it.

The first four sentences of story 17 concern the main episode. A new, minor episode is introduced by the double container in sentence 5. This kind of structure has a verb from the small class (e.g. precede, follow, result in) which may take a nominalization in both subject and object position.
The subject refers to an old episode and the object to a new one.

(17:5) The vote followed an hourlong debate that focused on covert United States activity in Nicaragua ....

The subject vote refers back to the story's main event, the Senate vote in the first sentence. The object, or new episode, is the nominalization debate. The object also tells of another episode concerning passage of a House bill. This bill episode is developed in 17:6 and 17:7.

The second minor episode is introduced with a simple detached PP of location in 17:8. This structure is used to shift the setting from the dateline location to a new place. In this case, the action moves from Washington to San Francisco:

(17:8) In San Francisco, a Federal district judge ordered Attorney General William French Smith to conduct a preliminary investigation of charges that President Reagan and other Government officials violated the Neutrality Act ....

This episode is not developed any further in this report, but is interrupted in the next sentence, a link-S, by the third minor episode. The link-S has a nominalized subject which refers back to a previous episode, while the object of came refers to a new episode. The conjunction or preposition shows the new episode's temporal relation to the old.

(17:9) The ruling came in a lawsuit filed by Representative Ronald V. Dellums, Democrat of California. [Page A9.]

The lawsuit episode is developed elsewhere in the paper. The page reference closes this episode, and therefore, since 17:10 contains no reference to a new place or time, and has a simple past main verb (told), it must by default be part of the original, main episode. This decision is supported by the eleventh sentence in the story (not included in the corpus): After this policy change, Mr. Moynihan said, the committee approved additional funds.

There is no example of the final episode marker in story 17--the sentence introduced by a detached time adverbial with a nominalization in a time phrase ("Two hours before the vote"; "During the Pope's visit"). The nominalization refers to a previous episode, and the main sentence to which the whole adverbial phrase is attached introduces the new episode. Story 10 ("French Jets Retaliate, Hit Shiite Positions," Boston Globe, November 18, 1983) begins with French planes bombing Iranian-backed militia in Lebanon. A related episode starts in sentence 5:

(10:5) Six hours after the French air attacks, gunmen fired rocket-propelled grenades and automatic weapons at a French peacekeeping post in the Shiite Moslem neighborhood of Khandik Ghamik in West Beirut.

Each episode in a report has the potential to contain its own main events, background events, plans, current states, identifications, and so forth. An extension of DUMP's labeling ability would be the creation of a discourse tree for each news report, with a root node dominating episode nodes, which in turn dominate relevant information categories.

THE DUMP PROGRAM

DUMP works very simply. It takes as input parsed sentences of a story and searches through them for the kinds of syntactic labels described above (declarative sentence, detached PP, etc.). These labels introduce information fields, each of which is stored on a stack. A set of rules is then applied to each entry on the stack, and assignment of each entry is made to one of the information categories on the basis of the structural label and an optional tense/aspect marker.
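Read procedurally, the paragraph above amounts to a label-driven loop. The following is a guessed reconstruction of that control structure, not the actual program: the label names, the rule table, and the field representation are invented stand-ins for whatever DUMP uses internally.

```python
# Hypothetical sketch: syntactic labels open information fields, fields go
# on a stack, and rules map (structural label, tense/aspect) to a category.
RULES = {
    ("main_clause", "simple_past_active"): "main_event",
    ("relative_clause", "simple_past"): "past_event",
    ("appositive", None): "identification",
    ("detached_participial", None): "effect",
    ("negated_clause", None): "collateral",
}
EPISODE_MARKERS = {"double_container", "detached_location_pp",
                   "link_s", "detached_time_adverbial"}

def dump(parsed_sentences):
    stack, analysis = [], []
    for sentence in parsed_sentences:
        for field in sentence["fields"]:        # labelled information fields
            if field["label"] in EPISODE_MARKERS:
                analysis.append(("new_episode", field["text"]))
            else:
                stack.append(field)
    while stack:                                # rule application pass
        field = stack.pop()
        key = (field["label"], field.get("tense_aspect"))
        analysis.append((RULES.get(key, "uncategorized"), field["text"]))
    return analysis

story = [{"fields": [
    {"label": "main_clause", "tense_aspect": "simple_past_active",
     "text": "The Senate today approved by voice vote ..."},
    {"label": "appositive",
     "text": "the New York Democrat who is vice chairman ..."},
]}]
print(dump(story))  # -> [('main_event', ...), ('identification', ...)]
```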
DUMP does not need a full parse of a sentence to assign syntactic structures to a particular information category. For example, it does not need to know anything about the attachment of clause-internal PPs, a difficult problem for parsing programs. Furthermore, newswriting (with the exception of quoted portions, which DUMP does not need parsed) does not reflect the use of a full grammar of English. The corpus contains no question forms, and a number of the "stylistic" transformations (pseudo-cleft and topicalization are examples) do not appear. The question of whether some kind of "fuzzy" parser with a limited number of rules could provide adequate output for DUMP is one for further research. On the other hand, whatever parser is used to prepare input for DUMP will need certain labels not ordinarily found in parse trees: sentences are not usually distinguished as equative or double container in type. Furthermore, DUMP requires some non-standard features on words. For example, we have seen in a number of instances how crucial it is to mark nouns as nominalizations.

RELATION TO OTHER WORK

The DUMP program embodies principles useful both to the processing of sublanguages and to AI research. In the former case, these principles allow preliminary automatic processing of texts within the same genre, regardless of the breadth of the semantic field. As noted earlier, current work with sublanguages relies on word co-occurrence classes which result from their very constrained subject matter. Newswriting covers a wide range of topics, and therefore word co-occurrence classes are not an efficient method of automatic processing. However, these reports do show predictable constraints in the use of syntactic constructions to express particular kinds of information, and it is this regularity that DUMP depends upon.

In the case of AI research, DUMP can serve as a support program to knowledge-based processors. The FRUMP program (DeJong, 1979), for example, creates summaries from sketchy scripts by looking for key requests, or main events, in the text. So, the script for an earthquake story might contain key requests for information about the quake's rating on the Richter Scale, the amount of property damage it did, where the epicenter was located, and how far shock waves were felt. FRUMP would then look to the newspaper text for evidence of each of the key requests in the script. The scripts are written by the programmer, based on his or her assumption of the most important information likely to be found in all stories about a particular topic. DUMP is freed from reliance on such scripts because of the fact that the news reporter, however unconsciously, encodes key requests syntactically. DUMP can locate these key requests easily and also signal the beginning of new episodes, thus facilitating one of the tasks which FRUMP finds most difficult--that of script selection. (Imagine the confusion that could result in story 17 when the Congressional script is interrupted in the eighth sentence by an episode requiring a judicial script.) Once all of the detached clauses and episodes in a report have been correctly labelled by DUMP, a knowledge-based processor could then go about building conceptual representations for each unit.

It is expected that DUMP's approach could be extended to other genres of writing, since most texts achieve texture by distinguishing foreground from background.
However, texts vary in the proportion of foregrounded to backgrounded material and in their preference for certain forms to convey grounding. The literary style of a discourse will therefore influence the design of automatic text processing programs. The style of news reports is relatively subordinated, non-redundant, and predicationally dense. The sentences in the DUMP corpus average 2.88 predications per sentence, as compared to a high of 2.78 in the informative sections of the Brown corpus and 2.64 across all genres (Francis and Kucera, 1982). The term predication refers to both the finite and non-finite types, and therefore the 2.88 figure indicates that the news corpus is characterized by a great deal of embedding of both types: finite clauses (relative clauses, adverbial clauses), as well as non-finites (infinitive complements, reduced relatives, participials).

It can be hypothesized that a highly predicated writing style such as journalese will show greater variety in its syntactic structures than a style with few predications per sentence. This syntactic diversity will reflect a text with less foregrounded material--in short, a text with greater texture. A further hypothesis is that in a predicationally dense style there will be a stronger correlation between syntactic forms and the particular information types expressed by these forms. It seems likely that a genre which uses few predications per sentence would consist chiefly of main clauses used as the workhorse to express all kinds of information: background, main events, plans, import, and so forth. Some of these information categories will be distinguishable by verb tense, aspect, mood and voice, as in the news. But others will have to rely on world knowledge for categorization. As an example, consider a revised version of the opening of story 6, rewritten so that embedded clauses in the original are expressed as main clauses:

Richard B. Stone met face-to-face today with a key leader of the Salvadoran guerrilla movement. He spent several frustrating weeks maneuvering the meeting. "The ice has been broken," proclaimed President Belisario Betancur of Colombia. He engineered the meeting.

Knowledge about the way plans are made would be needed to distinguish foreground from background in these sentences.

One further metric can be hypothesized for determining discourse genres suitable for syntactic analysis. In syntactic theory there is a well-known correlation between the flexibility of word order in a language and its use of morphosyntactic inflections. Languages like English which have lost most of their inflectional markers rely on rigid word order to establish syntactic relations. On the other hand, highly inflected languages like Latin can afford greater flexibility in word order, since inflections on the ends of words indicate their function in the sentence. An analogy might be drawn in which syntactic structures correspond to morphosyntactic inflections and information order in discourse corresponds to word order. The discourse structure of news reports violates canonical story form. The writer does not start at the beginning and relate events through to the end. The potential confusion introduced by this unpredictability is compounded by the density of new information in news reports. Perhaps the great regularity in the use of distinct syntactic forms to express the types of information conveyed in the news serves to compensate for the flexibility in discourse structure.
It is as though the strong correlation between syntactic form and information type frees the reader to process the large amount of new information being delivered. Just as inflectional endings allow the listener to assign words to their functional slots regardless of the order in which they appear, so the syntactic correlates to information types allow the news reader to quickly assign phrases their function in the discourse. Stories which adhere to a standard story grammar do not need such syntactic regularity, since the position of the material in the text indicates its function.

The extension of a program like DUMP to other discourse genres would require, first, the identification of the information categories expressed by the kind of text. Cookbooks, for example, convey instructions and descriptions, not main events, effects and identifications. Secondly, correlations between syntactic form and information type, and the syntactic means for indicating episode boundaries, must be determined. The degree of correlation between syntactic form and information type in non-news genres is a matter for further investigation.

ACKNOWLEDGMENTS

This research was carried out under grant G008101781 from the U.S. Department of Education, Program for the Hearing Impaired.

REFERENCES

Borko, Harold and Bernier, Charles. 1975. Abstracting Concepts and Methods. New York: Academic Press.

Comrie, Bernard. 1976. Aspect. Cambridge: Cambridge University Press.

Decker, Nan. 1985. Syntactic clues to discourse structure: A case from journalism. Ph.D. dissertation, Brown University.

DeJong, Gerald. 1979. Skimming stories in real time: An experiment in integrated understanding. Research Report #158, Department of Computer Science, Yale University.

Francis, W. Nelson and Kucera, Henry. 1982. Frequency Analysis of English Usage. Boston: Houghton-Mifflin Company.

Green, Georgia. 1979. Organization, goals and comprehensibility in narratives: newswriting, a case study. Technical Report #132. The Center for the Study of Reading, University of Illinois at Urbana-Champaign.

Grimes, Joseph. 1975. The Thread of Discourse. Janua Linguarum, Series Minor, no. 207. The Hague: Mouton.

Hirschman, Lynette and Sager, Naomi. 1982. Automatic information formatting of a medical sublanguage. In R. Kittredge and J. Lehrberger (Eds.), Sublanguage: Studies in Language in Restricted Semantic Domains. New York: Walter de Gruyter.

Hopper, Paul. 1979. Aspect and foregrounding in discourse. In T. Givon (Ed.), Syntax and Semantics, vol. 12. New York: Academic Press.

Hopper, Paul and Thompson, Sandra. 1980. Transitivity in grammar and discourse. Language 56: 251-299.

Mourelatos, Alexander. 1981. Events, processes and states. In P. Tedeschi and A. Zaenen (Eds.), Syntax and Semantics, vol. 14. New York: Academic Press.

Ota, Akira. 1963. Tense and Aspect of Present-Day American English. Tokyo: Kenkyusha.

Sager, Naomi. 1981. Natural Language Information Processing: A Computer Grammar of English and its Applications. Reading, MA: Addison-Wesley.

Schank, Richard and Riesbeck, Christopher. 1981. Inside Computer Understanding. Hillsdale, NJ: Lawrence Erlbaum Associates.

Thompson, Sandra. 1983. Grammar and discourse: The English detached participial phrase. In F. Klein-Andreu (Ed.), Discourse Perspectives on Syntax. New York: Academic Press.

Vendler, Zeno. 1967. Linguistics in Philosophy. Ithaca, NY: Cornell University Press.

Woods, William. 1973. An experimental parsing system for transition network grammars. In R. Rustin (Ed.), Natural Language Processing.
Englewood Cliffs, NJ: Prentice-Hall.
CLASSIFICATION OF MODALITY FUNCTION AND ITS APPLICATION TO JAPANESE LANGUAGE ANALYSIS

Shozo NAITO, Akira SHIMAZU, and Hirosato NOMURA
Musashino Electrical Communication Laboratories, N.T.T.
3-9-11, Midori-cho, Musashino-shi, Tokyo, 180, Japan

Abstract

This paper proposes an analysis method for Japanese modality. For this purpose, the meaning of Japanese modality is classified into four semantic categories, and its role is formalized into five modality functions. Based on these formalizations, the information and constraints to be applied to the modality analysis procedure are specified. Then, by combining these investigations with case analysis, the analysis method is proposed. This analysis method has been applied to Japanese analysis for machine translation.

1. Introduction

Since the meaning of a sentence consists of both proposition and modality, analysis of modality is as indispensable as that of proposition for natural language understanding and machine translation. However, studies on natural language analysis have mainly been concerned with the propositional part, and algorithms for analyzing modality have not yet been sufficiently developed. The aim of this paper is to clarify the function of modality and to propose a method for analyzing the modality in Japanese sentences.

The structure of a Japanese complex sentence can be formalized roughly by iterative concatenation of simple sentences. The simple sentence consists of cases and a predicate. The cases have surface representations of noun phrases or adverb phrases, while the predicate has that of a verb, adjective or adjective verb. A noun phrase is defined as the recursive concatenation of noun phrases or that of embedded sentences. We have employed the case structure as a basic meaning structure for a simple sentence, and extended it to retain the construction of complex sentences mentioned above. Modality is additive information represented by auxiliary words such as modal particles, ending particles and auxiliary verbs, and by sentence adverbs. The modal particle is attached to a noun phrase or a sentence element, while the ending particle is attached to the end position of a sentence. The auxiliary verb immediately follows a verb phrase. Modality represented in such grammatically different contexts is incorporated into the case structure, and the resulting construction is named an extended case structure [7], which enables us to propose a uniform framework for analyzing both proposition and modality.

In this paper, we first classify modality into four semantic categories. Second, we define five modality functions using the logical representation of the meaning and then characterize the roles of each function. Third, we specify hard problems to be resolved in modality analysis. Fourth, we list the information and constraints to be considered in establishing the procedure of modality analysis. Then, we propose a method for analyzing modality based on these investigations. Finally, we exemplify the analysis by showing translations from Japanese into English. The method has been used to analyze Japanese sentences in a machine translation system.[7]

2. Classification of modality

Traditionally, modality has been classified into three categories, i.e. tense, aspect and modal.[6] This classification is not sufficient for the deep analysis of the meaning structure of a sentence, however, because it does not account for the role of Japanese modal particles.
Adding this role, we expand this classification into four categories, namely tense, aspect, modal and implicature, shown in Table 1. Each category can be further classified into subcategories, and those are shown in Table 2 through Table 5 (each table gives both examples of Japanese expressions and their English equivalents). Our classification of modality features two characteristics concerning the assignment of adverbs and modal particles:

(1) Among the two kinds of adverbs, namely sentence adverbs and case adverbs, we assign sentence adverbs to modality, while case adverbs are assigned to case relations. Sentence adverbs are classified into three subcategories in the modal category: [evaluation], [judgement] and [statement-manner]. (Traditionally, all adverbs are assigned to modality.)

(2) Modal particles are assigned to modality and are classified into a distinct category, implicature. (They have usually been discussed separately from modality.)[4]

Table 1. Four categories of Modality

  Category     Meaning
  Tense        temporal view of an event relative to the speaking time
  Aspect       state of events viewed from time progress at a specified time point
  Modal        speaker's or agent's attitude or judgement to the occurrence of events
  Implicature  implicative meaning represented by modal particles

3. Modality functions and their roles

By employing logical expressions as the representation of the meaning structure, we can define modality functions as operations on logical expressions in strict terms. In the past, studies on modality analysis in a logical framework treated each type of modality individually.[5][6] Here, we deal with it, however, as a whole and combine it with the propositional structure, so that we can provide a uniform framework for the representation and the analysis of the meaning structure. For this purpose we employ the higher order modal logic formalism.[1] In this regard, we introduce five types of modality functions, which add or modify modality:

(1) addition of the modality operator,
(2) surface modification of the case structure,
(3) semantic modification of the case structure,
(4) determination of the scope of negation,
(5) addition of the implicative meaning.

We will now discuss the roles of each type of modality function respectively by indicating their logical representations.

3.1 Addition of the modality operator

This is the most fundamental function, and it simply adds the modality meaning to the propositional meaning. In the following two sentences, (s1) has no modality while (s2) has modality:

(s1) Hiroko ga hashiru. (Hiroko runs.)
Run(Hiroko),*

(s2) Hiroko ga hashit teiru. (Hiroko is running.)
[durative]Run(Hiroko).

* In the following, each example sentence is succeeded by an English translation and a logical representation of the meaning.

(s2) is obtained by adding the durative aspect operator "teiru (progressive)" to (s1).

Table 3. Tense

  Meaning   Japanese expression  English expression
  Past      ta                   -ed (past tense)
  Non-past  ru                   present tense, or future tense

3.2 Surface modification of the case structure

This does not change the logical meaning structure even when the surface structure is modified. However, higher level information such as focus and attention is sometimes added. The passive auxiliary verb "reru" or "rareru" can modify the surface case structure without changing the logical meaning structure. The focus is usually placed on the subject part of the passive sentence, as follows:

(s3) Hiroko ga yasai wo taberu.
(Hzroko eats vegetables.}, 3x(Vegetable(x)AEat(Hiroko,x)), (s4) Yasai ga Hiroko ni tabe rareru. (Vegetables are eaten by Hzroko.), 3x((Vegetable(x)AEat(Hiroko,x))A{Focus(x)}), where the predicate Focus(x) signifies that the focus is placed on the argument x. 3.3 Semantic modification of the case structure This results in one of the two alternatives : (a) one argument is added to the original predicate, (b:, a higher order predicate is introduced. Both changes are equivalent in meaning but the way of representing the change is different. The following fragments of modality cause the semantic modification of the case structure : I) causative Cseru" or "saseru"), 2J affected-passive Creru" or "rareru"), 3) hope Ctehoshii'" and "temoraitai"), 4~ request ,~"temorau"), 5) benefit ("tekureru .... teageru", and "teyaru"). Tabie 2. Aspect ( tdou means concatenation, and d~ mtans empty character.) Meaning Japanese expressi.n ~ Er~glish expression Inchoative • ] ust-bei'or e- incJ'd~a tive haji mf~ru, - kakeru. ~dasu I . . . . (-hajimeru, *-kakc:u ~dasuJ (tokoro, bakari;, u~.osuru, tokoro, bakari [inchoa=ive verhl begin, commence, start: 'set about -. -ing'. fai to. c~me to, take to I be go ng to. be go=ng to-*-[inchoative verbl just have [inchoative verbi-en Just-afterdnchoative i--ha'line. ~kake. ~dashi#. ta - (tokoro, hakari) Durative ~teiru, ~ e.ru, ~tsuLukert:, ~tesrutokoro, 11dut-:ttive verb~ go on, "keep (onJ *- -ing'. continue, remain, teik u. ~ t~utsuaru I ver.h + on and on, over and over, (repetition of verb) Iterative -teiru, ~teoru, -tsuzukeru t verb reDresnntin~ repetition of action (durative verbl Terminative J ust-before-termin:, te --owaru, --oeru, -teshimau (-owaru, -oeru, -teshimau) - (tokoro, bakarD (-owat, -oe, ~teshimat, d#). ta- (tokoro, bakari) ! ~owat, -oe, --te.~himat, ~b) • telru J ust-after-terminative Terminative- qtate g r {, I I {affected verbl cease, finish, leave off, discontinue, 'stop d- -ing' be going t.o -¢- { affected verbl [ just have {affected verbl-en i huve-,~..en 28 For an example, the causative auxiliary verb "seru" or %aseru" results in (a) the addition of the causative agent, or (b) the introduction of a second-order predicate CAUSE(x,y) in which argument x represents the causative agent and argument y represents a predicate, as follows : (s5) Taro ga Hiroko ni yasai wo tabe saseru. (Taro makes Hiroko eat vegetables.) (a)3x(Vegetable(x)/',Eat'(Hiroko,x,Taro)), or (b)3x(Vegetable(x)ACAUSE(Taro, Eat(Hiroko,x))), where the predicate Eat'(x, y, z) is obtained by adding the argument z corresponding to the causative agent to the predicate Eat{x, y) in (s3). For another example, though the auxiliary verb "reru" or "rareru" has five meanings, namely, "passive", "affected-passive", "ability", "respective" and "spontaneity", "passive" meaning among them falls into type (2) above while "affected-passive" meaning falls into this type and the affected-agent is added : Is6) Taro ga Hiroko ai yasai wo tabe rareru. (Taro was'adversely) affected by Hiroko's eating vegetables.) (a) 3x(Vegetable( x }/xEat"(Hiroko.x.Ta ro)), or (b)3x(Vegetable(x)AAFFECTED-PASSIVE (Taro,Eat(Hiroko,x))). 3.4 Determination of the scope of negation Table 5. 
Table 5. Implicature

  Meaning          Japanese expression                     English expression
  Limitation       shika, kiri, dake, bakari, made, kurai  only
  Degree           dake, bakari, hodo, kurai               as, about
  Extreme-example  sae, demo, datte, made                  even
  Stress           sae, ha, mo, koso                       even
  Example          demo, nado, nari                        for example
  Parallel         yara, ya, mo                            and
  Addition         sae, made                               also
  Selection        nari, ka                                or
  Uncertainty      yara, ka                                some
  Distinction      ha                                      as for

3.4 Determination of the scope of negation

The modal particle "wa" determines the role of the auxiliary verb "nai" as partial negation, while the case particle "ga" determines it as total negation. In the following sentences, (s9) is partially negated while (s8) is totally negated:

(s7) Zen'in ga kuru. (Everybody comes.)
∀x(S(x) ⊃ Come(x)),

(s8) Zen'in ga ko nai. (Nobody comes.)
∀x(S(x) ⊃ ¬Come(x)),

(s9) Zen'in wa ko nai. (Not everybody comes.)
¬∀x(S(x) ⊃ Come(x)),

where the predicate S(x) denotes "zen'in (all the persons)".

Table 4. Modal

  Meaning                  Japanese expression                              English expression
  Negation                 nai, zu                                          not, never
  Ability                  dekiru, uru, reru, rareru                        can, be able to, be possible
  Spontaneity              reru, rareru                                     become to
  Obligatoriness           nakerebanaranai, nebanaranai, bekida             must, should, have to
  Necessity                hitsuyougaaru                                    be necessary
  Inevitability            zaruwoenai, hokanai                              cannot help ...ing
  Preference               hougayoi, nikoshitakotohanai, saesurebayoi       may well
  Sufficiency              bajuubunda, bayoi                                be enough
  Try                      temiru                                           try
  Command                  nasai, [imperative form of verb]                 [imperative form of verb]
  Question                 ka                                               [interrogative transformation]
  Request (to 2nd person)  tekure                                           please ...
  Request (to 3rd person)  temorau                                          get (a person) to do
  Permission               teyoi                                            may, can
  Invitation               u                                                Let's, Shall we
  Causation                seru, saseru                                     make (a person) do, have (a person) do
  Stress                   noda, nodearu                                    do
  Certain-presumption      hazuda, nichigainai                              must
  Uncertain-conclusion     youda, souda                                     be likely
  Presumption              rashii                                           seem
  Guess                    u, you, darou, toomowareru                       think
  Uncertain-guess          kamoshirenai                                     may
  Hearsay                  souda                                            I hear that, It is said that ...
  Intention                u, tsumorida, utoshiteiru                        be going to, will
  Plan                     yoteidearu, kotonishiteiru                       have a plan to
  Hope                     tai, tehoshii, temoraitai                        hope, want
  Passive                  reru, rareru                                     [passive transformation]
  Affected-passive         reru, rareru                                     [affected-passive transformation]
  Benefit                  tekureru                                         have (a person) do
  Politeness               desu, masu
  Respect                  reru, rareru
  [Evaluation]             saiwainimo, zannennakotoni, odoroitakotoni, ...  fortunately, regretably, to our surprise, ...
  [Judgement]              osoraku, kanarazu, akirakani, omouni, ...        perhaps, surely, evidently, in my opinion, ...
  [Statement-manner]       genmitsuniitte, yousuruni, hontounotokoro, ...   strictly speaking, in short, in all fairness, ...

3.5 Addition of the implicative meaning

An extra logical formula corresponding to the implicative meaning is added by modal particles such as "shika (only)" and "dake (only)", as in:

(s10) Hiroko wa yasai shika tabe nai. (Hiroko eats nothing but vegetables.)
∃x(Vegetable(x) ∧ Eat(Hiroko,x)) ∧ ∀x(¬Vegetable(x) ⊃ ¬Eat(Hiroko,x)).

4. Problems in modality analysis

4.1 Ambiguity of the modality meaning

(1) Ambiguity due to multiple meaning

The aspect expression "teiru" has three different kinds of meanings, that is, the "durative", "iterative" or "terminative-state" aspects. For example,

(s11) Hiroko ga yasai wo tabe teiru. (Hiroko {is eating, eats and eats, has eaten} vegetables.)
∃x(Vegetable(x) ∧ {[durative],[iterative],[terminative-state]}Eat(Hiroko,x)).
(2) Ambiguity concerned with case structure

As stated in Section 3.3 above, the auxiliary verb "reru" or "rareru" has five meanings, and, among them, the "passive" and "affected-passive" meanings result in modification to the case structure. Therefore, disambiguation of the meaning of "reru" or "rareru" has a close relationship to analysis of the propositional meaning. Moreover, the auxiliary verb "rareru" in the following (s12) means "respect", and that in (s13) means "passive", respectively, whereas both expressions are the same except for the additional meanings of respect and focus, as follows:

(s12) Sensei ga yasai wo tabe rareru. (The teacher eats vegetables.)
∃x(Vegetable(x) ∧ Eat(the-Teacher,x)) ∧ Respect(Speaker,the-Teacher),

(s13) Yasai ga sensei ni tabe rareru. (Vegetables are eaten by the teacher.)
∃x((Vegetable(x) ∧ Eat(the-Teacher,x)) ∧ {Focus(x)}),

where the predicate Respect(x,y) means that x respects y.

4.2 Scope of modality

Even if the main clause has a negative expression, it does not always mean that the main clause is negated. Sometimes the subordinate clause is negated. We call this phenomenon the transfer of negation. Furthermore, even if the modality involved is not negation, it sometimes affects the subordinate clause. Although the main clause in the following (s14) is not usually negated, the subordinate clause is. Nevertheless, the tense information in the main clause has an effect on the subordinate clause. (s14) is constructed from (s14-1) and (s14-2) by a simple coordinate conjunction; however, the corresponding logical expression is not a simple concatenation of each logical expression:

(s14) Taro wa hige wo sot te kaisha e ika nakat ta. (Taro went to the company without shaving.)
[past]¬Shave(Taro,beard) ∧ [past]Go(Taro,Company),

(s14-1) Taro wa hige wo soru. (Taro shaves his beard.)
Shave(Taro,beard),

(s14-2) Taro wa kaisha e ika nakat ta. (Taro did not go to the company.)
[past]¬Go(Taro,Company).

(s8) and (s9) also exemplify the problem of determining the scope of negation.

4.3 Treatment of implicative meaning

Modal particles such as "shika (only)" and "sae (even)" convey individual implicative meanings. In order to obtain the logical representation of the implicative meaning, we are forced to provide different formulae expressive of each meaning of each modal particle. For example, if we assign the formula (f1) to the expression "shika...nai", which consists of the modal particle "shika" and the auxiliary verb "nai", we get the logical representation of the sentence (s10) by the procedure of λ-calculus shown in Fig. 1.

(f1) "shika...nai" → λPλQλR(∃x(P(x) ∧ RQ(x)) ∧ ∀x(¬P(x) ⊃ R¬Q(x))).

"Hiroko" → λP P(Hiroko)
"yasai" → λx Vegetable(x)
"taberu" → λyλz Eat(z,y)

"yasai shika ... nai"
→ λPλQλR(∃x(P(x) ∧ RQ(x)) ∧ ∀x(¬P(x) ⊃ R¬Q(x)))(λx Vegetable(x))
→ λSλT(∃u(λx Vegetable(x)(u) ∧ TS(u)) ∧ ∀u(¬λx Vegetable(x)(u) ⊃ T¬S(u)))
→ λSλT(∃u(Vegetable(u) ∧ TS(u)) ∧ ∀u(¬Vegetable(u) ⊃ T¬S(u)))

"yasai shika tabe nai"
→ λSλT(∃u(Vegetable(u) ∧ TS(u)) ∧ ∀u(¬Vegetable(u) ⊃ T¬S(u)))(λyλz Eat(z,y))
→ λT(∃u(Vegetable(u) ∧ T λyλz Eat(z,y)(u)) ∧ ∀u(¬Vegetable(u) ⊃ T ¬λyλz Eat(z,y)(u)))
→ λT(∃u(Vegetable(u) ∧ T λz Eat(z,u)) ∧ ∀u(¬Vegetable(u) ⊃ T ¬λz Eat(z,u)))

"Hiroko wa yasai shika tabe nai"
→ λT(∃u(Vegetable(u) ∧ T λz Eat(z,u)) ∧ ∀u(¬Vegetable(u) ⊃ T ¬λz Eat(z,u)))(λP P(Hiroko))
→ ∃u(Vegetable(u) ∧ λP P(Hiroko)(λz Eat(z,u))) ∧ ∀u(¬Vegetable(u) ⊃ λP P(Hiroko)(¬λz Eat(z,u)))
→ ∃u(Vegetable(u) ∧ λz Eat(z,u)(Hiroko)) ∧ ∀u(¬Vegetable(u) ⊃ ¬λz Eat(z,u)(Hiroko))
→ ∃u(Vegetable(u) ∧ Eat(Hiroko,u)) ∧ ∀u(¬Vegetable(u) ⊃ ¬Eat(Hiroko,u))

Fig. 1. Logical analysis of the sentence (s10)

As can be seen from the example, the logical formula for the implicative meaning is very individual. This means that specifying such a formula for each meaning of each particle is very complicated and hard, and a more effective method is therefore needed.
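Formula (f1) and the reduction in Fig. 1 can be checked mechanically on a toy model. The following minimal sketch encodes (f1) with Python closures over a two-element domain; the model facts (who eats what) are invented purely to exercise the formula, and `a <= b` on booleans stands in for the implication a ⊃ b.

```python
# A toy model-checker for (f1): "shika...nai" as nested closures.
DOMAIN = ["carrot", "steak"]
exists = lambda p: any(p(x) for x in DOMAIN)
forall = lambda p: all(p(x) for x in DOMAIN)
neg = lambda prop: lambda z: not prop(z)         # negation of a property

vegetable = lambda x: x == "carrot"              # "yasai"
EATS = {("Hiroko", "carrot")}                    # invented model facts
taberu = lambda y: lambda z: (z, y) in EATS      # λyλz.Eat(z, y)
hiroko = lambda P: P("Hiroko")                   # λP.P(Hiroko)

# (f1): λPλQλR(∃x(P(x) ∧ R(Qx)) ∧ ∀x(¬P(x) ⊃ R(¬Qx)))
shika_nai = lambda P: lambda Q: lambda R: (
    exists(lambda x: P(x) and R(Q(x))) and
    forall(lambda x: (not P(x)) <= R(neg(Q(x)))))

# "Hiroko wa yasai shika tabe nai" is true in a model where Hiroko
# eats the carrot and nothing else:
assert shika_nai(vegetable)(taberu)(hiroko) is True
```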
5. Information and constraints on modality analysis

(1) Lexical meaning

The lexical meaning assigned to each modality expression is the most fundamental information, so we need to specify and provide it. For example, the lexical meaning of the auxiliary verb "ta" is generally the "past" tense, as in:

(s15) Hiroko ga hashit ta. (Hiroko ran.)
[past]Run(Hiroko).

(2) Predicate features

Predicate features are available for disambiguating the meaning of modality. Though the aspect auxiliary verb "teiru" is ambiguous in meaning, we can resolve the ambiguity by using predicate features such as "stative", "continuous" and "spontaneous", as in:

(s16) Hiroko ga hashit teiru. (Hiroko is running.)
[durative]Run(Hiroko),

(s17) Akari ga kie teiru. (The light is turned off.)
[terminative-state]Turn-off(the-Light),

where the verb "hashiru (run)" has the "continuous" feature while the verb "kieru (turn off)" has the "spontaneous" feature. The aspect expression "teiru" following a "continuous" verb usually means the "durative" aspect, and "teiru" following a "spontaneous" verb usually means the "terminative-state" aspect. The "spontaneity" meaning of "reru" or "rareru" is realized only when it follows verbs having the spontaneity feature, such as "omoidasu (remember)" and "anjiru (care)".

(3) Noun phrases and adverbs

Some kinds of noun phrases, adverbs, and their semantic categories can be utilized to disambiguate the meaning of modality when they occur simultaneously with it.

(s18) Hiroko ga yasai wo ima tabe teiru. (Hiroko is eating vegetables now.)
∃x(Vegetable(x) ∧ [durative]Eat‴(Hiroko,x,now)),

(s19) Hiroko ga yasai wo sudeni tabe teiru. (Hiroko has already eaten vegetables.)
∃x(Vegetable(x) ∧ [terminative-state]Eat‴(Hiroko,x,already)).

In the above examples, the adverb "ima (now)" is concerned with the "durative" aspect, while "sudeni (already)" is concerned with the "terminative-state" aspect. The argument z of the predicate Eat‴(x,y,z) represents time information.

(4) Modal particles

As discussed in Section 3 (sentences (s8) and (s9)), the modal particle "wa" occurring simultaneously with negation suggests partial negation.

(5) Conjunctive relations

Conjunctive relations are related to the scope of modality. If the subordinate clause has one of the following conjunctive relations, represented by (a) the conjunctive particle "te", or (b) a relative noun such as "toki (time)" or "mae (before)" modified by embedded sentences, the transfer of negation can be predicted, as in sentence (s14). Otherwise, the transfer never occurs, as follows:

(s20) Taro wa hige wo sot ta ga kaisha e ika nakat ta. (Though Taro shaved his beard, he did not go to the company.)
[past]Shave(Taro,beard) ∧ [past]¬Go(Taro,Company).

(6) Semantic relations between the subordinate clause and the main clause

This information is used to determine the scope of negation in the main clause.
In the subordinate clause with the conjunctive particle "te", if the event expressed by it is subsidiary to the occurrence of the event in the main clause, the transfer of negation can occur. On the other hand, if the subordinate event is indispensable to the occurrence of the main event, the transfer never occurs. For example, in (s14), since the modifier event Shave(Taro,beard) is a subsidiary event for the occurrence of the main event Go(Taro,Company), the transfer of negation is possible. In the following sentence (s21), however, since the event Go(Taro,Washington) is an indispensable event for the occurrence of the main event See(Taro,the-White-House), the transfer is impossible:

(s21) Taro wa Washington e it te White House wo mi nakat ta. (Taro did not see the White House when he went to Washington.)
[past]Go(Taro,Washington) ∧ [past]¬See(Taro,the-White-House).

6. Modality analysis

6.1 Strategy of the modality analysis

Considering the five modality functions defined in Section 3, it is apparent that the logical analysis method alone is not effective for modality analysis. There are three reasons for this:

(1) reference to other expressions is needed to resolve the ambiguity of the modality function,
(2) structural modification occurs when the scope of negation is transferred,
(3) analysis of the implicative meaning sometimes causes a change of the logical expression.

There remains, however, the problem of taking the individuality of each modality into account. For some kinds of modality, the result of the case analysis or the conjunctive analysis is used to analyze it. These are the reasons why we propose an analysis method consisting of the following three modules combined with the case analysis and the conjunctive analysis:

(1) pre-case-analysis: activated before the case analysis,
(2) post-case-analysis: activated after the case analysis,
(3) post-conjunctive-analysis: activated after the conjunctive analysis.

The relationship of these three modules to the case analysis and the conjunctive analysis is shown in Fig. 2.

  pre-case-analysis: surface and semantic modification of the case frame
        |
  [case analysis]
        |
  post-case-analysis: (1) disambiguation of the modality function,
                      (2) determination of the scope of negation,
                      (3) addition of the implicative meaning
        |
  [conjunctive analysis]
        |
  post-conjunctive-analysis: determination of the scope of the modality
                             in the main clause

Fig. 2. Framework of the modality analysis

6.2 Algorithms of each sub-analysis

(1) Pre-case-analysis

The modality whose analysis requires only lexical meaning, or which causes a change of the case structure, is analysed at this stage. The case frame to be assigned to the predicate is modified by utilizing the result of this analysis before starting the case analysis. As for the semantically ambiguous auxiliary verb "reru" or "rareru", its role is only predicted at this stage, because it is also concerned with the modification of the case structure. After case analysis, the plausibility of the prediction is evaluated. The modification of the case frame is as follows:

(a) For the "passive" meaning of "reru" or "rareru" (which causes a surface change to the case structure, as mentioned in Section 3.2), the object case of the original case frame is changed into the surface subjective case, and the modality category "passive" is assigned to the meaning structure. If two object cases exist, two possible modifications are performed.
(b) For the modality causing a semantic change to the case structure (the modality function stated in Section 3.3), a new case is added as follows:

(b1) For the "causative", "affected-passive", "hope" or "request" meaning: a new agent (e.g. causative-agent / affected-agent) is added, and the case particle of the original subjective case is changed from "ga" to "ni".

(b2) For the "benefit" meaning: a beneficiary case is added. The case particle in this case is "ni".

Also, the modality category corresponding to each meaning (e.g. "causative", "affected-passive") is assigned to the meaning structure.

(2) Post-case-analysis

The modality whose analysis requires case structure information is analyzed at this stage. This module determines the function of the modality as follows:

(a) If the category of the modality expression is unique, this category is assigned to the meaning structure.

(b) If a daemon (a procedure to resolve ambiguities by using heuristics) is attached to the modality expression, it performs three tasks:

(b1) disambiguating the function of the modality expression,
(b2) determining the scope,
(b3) adding the implicative meaning.

The daemon utilizes the information mentioned in (1)-(4) in Section 5. For example, a daemon attached to the aspect expression "teiru" works as shown in Fig. 3.

  Is there a case element (noun phrase or adverb) suggesting the
  "terminative-state", "durative" or "iterative" aspect?
    yes -> the suggested aspect (terminative-state, durative or iterative)
    no  -> Does "teiru" follow "reru" or "rareru"?
             yes -> terminative-state aspect
             no  -> Is the feature of the predicate "spontaneous"?
                      yes -> terminative-state aspect
                      no  -> durative aspect

Fig. 3. Daemon which disambiguates the meaning of the aspect expression "teiru"

(3) Post-conjunctive-analysis

Following the conjunctive analysis between the subordinate clause and the main clause, this module is activated to determine whether the modality in the main clause also operates on the subordinate clause. This module utilizes heuristics consisting of all of the information presented in Section 5. An example of the heuristics which analyze the scope of the auxiliary verb "ta" is shown in Fig. 4. For negation in the main clause, the transfer of negation is considered: whether or not the modifier event is subsidiary to the occurrence of the main event is tested using the semantic relations assigned to the predicate of the main clause.

  Is the conjunction of the subordinate clause the conjunctive particle
  "te", "to" or "ba", or "renyou-chuushi", and does the subordinate
  clause lack time information such as time cases?
    yes -> operate the time information in the main clause
           over the subordinate clause
    no  -> no operation

Fig. 4. Heuristics which analyse the scope of the auxiliary verb "ta"
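Fig. 3 reads directly as a short decision list. The following sketch is one possible rendering of that daemon, with invented attribute names for the clause representation; it follows the figure and the predicate-feature constraints of Section 5(2), but it is not the system's actual code.

```python
def teiru_daemon(clause):
    """Disambiguate the aspect of "teiru" along the lines of Fig. 3.

    The clause is assumed to carry: an optional aspect hint contributed
    by a co-occurring case element (e.g. "ima" -> durative, "sudeni" ->
    terminative-state), a flag for "teiru" following "reru"/"rareru",
    and the predicate feature ("continuous", "spontaneous", ...).
    """
    hint = clause.get("aspect_hint")          # from a noun phrase or adverb
    if hint in ("terminative-state", "durative", "iterative"):
        return hint
    if clause.get("follows_rareru"):          # e.g. "tabe rare teiru"
        return "terminative-state"
    if clause.get("predicate_feature") == "spontaneous":
        return "terminative-state"            # e.g. "kieru" in (s17)
    return "durative"                         # e.g. "hashiru" in (s16)

assert teiru_daemon({"follows_rareru": True}) == "terminative-state"
assert teiru_daemon({"predicate_feature": "continuous"}) == "durative"
```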
Both analyses go on iteratively and/or recursively from a small chunk of constituents to a large one. Each iteration and recursion executes both the prediction of the syntactic structure and the analysis of the semantic structure. The modality analysis is incorporated into those processes. Let us show the modality analysis process for the following example sentence:

(s22) Niku wa nokot teite, yasai dake ga Hiroko ni tabe rare teita.
(Meat had remained, and only vegetables had been eaten by Hiroko.)

At first, it is determined that this sentence is a complex sentence by utilizing syntactic structure patterns. After the semantic structures of the modifier and the main clause are analyzed, the conjunctive relation between these clauses is analyzed. Now we show the analysis of the main clause. The following case elements and a predicate are identified by applying structure patterns before starting case analysis:

case1 = "yasai", "ga", "dake",
case2 = "Hiroko", "ni",
predicate = "taberu", "rareru", "teiru", "ta",

where "dake", "rareru", "teiru", and "ta" are modality expressions. "Hiroko" and "yasai" have the semantic categories [human] and [food] respectively in each word frame.

(2) Modification of case frame

A case frame is prepared for each meaning of each predicate. The intrinsic case frame for the verb "taberu (eat)" is as follows (optional cases such as time and place are omitted here):

[the intrinsic case frame of the verb "taberu (eat)"]:
Agent = [human], "ga",
Object = [food], "wo".

Each case slot in the case frame is assigned semantic categories and case particles as constraints to be satisfied by the filler. The following alternative case frames produced by modifying the intrinsic frame are also prepared before starting case analysis because of the existence of the auxiliary verb "rareru":

["passive" modification of the case frame]:
Agent = [human], "ni",
Object = [food], "ga",

["affected-passive" modification of the case frame]:
Affected-agent = [human], "ga",
Agent = [human], "ni",
Object = [food], "wo".

These three case frames are examined to see whether each case element in the sentence satisfies the constraints. As a result, in this case, the "passive" modification case frame is selected as the best match, and the case role of each case element is determined as follows:

case1 = Object, case2 = Agent.

This result shows that the meaning of "rareru" is "passive".

(3) Determination of meaning of modality

Modality expressed by modal particles in case elements and by auxiliary verbs is analyzed. The analysis of "teiru" is performed by the heuristics shown in Fig. 3, where the meaning is determined as "terminative-state", judging from the fact that "teiru" follows "rareru". The meaning of the modal particle "dake" is multiple, that is, "limitation" and "degree". In this case, "limitation" is selected by heuristics.

(4) Determination of scope of modality in the main clause

After the conjunctive analysis between the modifier and the main clause, the scope of the auxiliary verb "ta" in the main clause is analyzed. Using the heuristics shown in Fig. 4, it is determined that "ta" also operates on the subordinate clause. As a result, the meaning structure of (s22) is obtained as follows:

∃x(Meat(x) ∧ [past][terminative-state]Remain(x))
∧ ∃x(Vegetable(x) ∧ [past][terminative-state]Eat(Hiroko,x))
∧ ∀x((¬Vegetable(x) → ¬[past][terminative-state]Eat(Hiroko,x)) ∧ {Focus(x)}).

An English sentence corresponding to this semantic structure is shown in (s22).
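The case-frame selection just described can be sketched as follows (illustrative Python with assumed data structures; the frame and element encodings are simplifications of the word frames and case frames above):

    # alternative case frames for "taberu (eat)", as prepared by pre-case-analysis
    CASE_FRAMES = {
        "intrinsic":        {"Agent": ("human", "ga"), "Object": ("food", "wo")},
        "passive":          {"Agent": ("human", "ni"), "Object": ("food", "ga")},
        "affected-passive": {"Affected-agent": ("human", "ga"),
                             "Agent": ("human", "ni"), "Object": ("food", "wo")},
    }

    # case elements of the main clause of (s22): semantic category and particle
    ELEMENTS = {"yasai": ("food", "ga"), "Hiroko": ("human", "ni")}

    def matches(frame):
        # each case slot must be filled by an element satisfying both the
        # semantic-category constraint and the case-particle constraint
        fillers = {}
        for role, (category, particle) in frame.items():
            found = [w for w, (cat, p) in ELEMENTS.items()
                     if cat == category and p == particle and w not in fillers.values()]
            if not found:
                return None
            fillers[role] = found[0]
        return fillers

    for name, frame in CASE_FRAMES.items():
        result = matches(frame)
        if result:
            print(name, result)
    # -> passive {'Agent': 'Hiroko', 'Object': 'yasai'}

Only the "passive" frame is satisfiable, which is how the analysis both assigns the case roles and fixes the meaning of "rareru" as "passive".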
6.4 Virtue of modality analysis

We show the contributions of modality analysis to understanding and to the quality of translation for the following example sentences.

(s23) Densha wa senro no ue shika hashiru kotogadeki nai ga, watashi ga kinou eiga de mita densha wa sora wo tobu kotomodeki ta.
(Though a train can run only on a railroad, the train I saw in a movie yesterday could also fly.)

(s24) Anata wa densha ga sora wo tobu kotogadekiru to omoi masu ka.
(Do you think that a train can fly?)

(1) [speech act] As shown in (s24), modality contains much information concerning speech acts (question, command, guess, intention, etc.). In conversational systems such as question answering systems, these meanings can be used for selecting appropriate reactions.

(2) [type of object] Analysis results of aspect or tense are used for determining the type of objects. The subordinate clause of (s23) describes a general character of "densha (train)", and the first occurrence of "densha" denotes a generic object. On the other hand, the second occurrence of "densha" is modified by an embedded sentence, and "densha" denotes a specific object which "I saw in a movie yesterday". In this way, if the character of the event is determined by the analysis of aspect or tense, the character of the objects can be specified.

(3) [translation] As shown in the translated sentences of (s23) and (s24), the results of the modality analysis are clearly realized in the quality of the translated sentences. In these sentences, modality such as "limitation", "negation", "ability", "past", and "question" appears.

7. Conclusion

We proposed an analysis method for Japanese modality. For this purpose, we classified the meaning of modality into four categories, and then defined five modality functions which characterize the role of modality. By employing logical expressions to represent the meaning structure, we could effectively specify the modality function. Though logical expression has the same expressive power as frames or semantic networks, a more concise semantic representation can be realized by this method. Although we dealt with the modality analysis restricted within the scope of one sentence in this paper, we must investigate the effect of discourse information on the analysis of modality in the future. We have applied this modality analysis method to the Japanese sentence analysis in the Japanese-English experimental machine translation system, LUTE.

References

[1] Dowty, D. R., R. E. Wall, and S. Peters: Introduction to Montague Semantics, 1981.
[2] Fillmore, C. J.: Toward a Modern Theory of Case and Other Articles, Japanese edition, 1975.
[3] Karttunen, L. and S. Peters: Conventional Implicature, "Syntax and Semantics" 11, ed. by C.-K. Oh and D. A. Dinneen, 1979.
[4] Kubo, S.: A Study of Japanese Adverbial Particles in Montague Grammar, "Linguistic Journal of Korea", vol. 7, no. 2, 1982.
[5] Keenan, E.: Negative Coreference: Generalizing Quantification for Natural Language, "Formal Semantics and Pragmatics for Natural Languages", ed. by F. Guenthner and S. J. Schmidt, 1979.
[6] Nakau, M.: Tense, Aspect, and Modality, "Syntax and Semantics" 5, ed. by M. Shibatani, 1978.
[7] Shimazu, A., S. Naito, and H. Nomura: Japanese Language Semantic Analyser based on an Extended Case Frame Model, Proc. of 8th International Joint Conference on Artificial Intelligence, 1983.
GRAMMAR VIEWED AS A FUNCTIONING PART OF A COGNITIVE SYSTEM

Helen M. Gigley
Department of Computer Science
University of New Hampshire
Durham, NH 03824

ABSTRACT

How can grammar be viewed as a functional part of a cognitive system? Given a neural basis for the processing control paradigm of language performance, what roles does "grammar" play? Is there evidence to suggest that grammatical processing can be independent from other aspects of language processing?

This paper will focus on these issues and suggest answers within the context of one computational solution. The example model of sentence comprehension, HOPE, is intended to demonstrate both representational considerations for a grammar within such a system as well as to illustrate that by interpreting a grammar as a feedback control mechanism of a "neural-like" process, additional insights into language processing can be obtained.

1. Introduction

The role of grammar in defining cognitive models that are neurally plausible and psychologically valid will be the focus of this paper. While linguistic theory greatly influences the actual representation that is included in any such model, there are vast differences in how any grammar selected is "processed" within a "natural computation" paradigm. The processing does not grow trees explicitly; it does not transform trees explicitly; nor does it move constituents.

In this type of model, a grammar is an explicit encoded representation that coordinates the integrated parallel process. It provides the interfaces between parallel processes that can be interpreted within semantic and syntactic levels separately. It furthermore acts as a "conductor" of a time-synchronized process. Aspects of how a grammar might be processed within a cognitive view of sentence comprehension will be demonstrated within an implemented model of such processing, HOPE (Gigley, 1981; 1982a; 1982b; 1983; 1984; 1985). This view of grammatical "process" suggests that neural processing should be included as a basis for defining what is universal in language.

2. Background

There are currently several approaches to developing cognitive models of linguistic function (Cottrell, 1984; Cottrell and Small, 1983; Gigley, 1981; 1982a; 1982b; 1983; 1984; 1985; Small, Cottrell and Shastri, 1982; Waltz and Pollack, in press). These models include assumptions about memory processing within a spreading activation framework (Collins and Loftus, 1975; Hinton, 1981; Quillian, 1968/1980), and a parallel, interactive control paradigm for the processing. They differ in the explicit implementations of these theories and the degree to which they claim to be psychologically valid.

Computational Neurolinguistics (CN), first suggested as a problem domain by Arbib and Caplan (1979), is an Artificial Intelligence (AI) approach to modelling neural processes which subserve natural language performance. As CN has developed, such models are highly constrained by behavioral evidence, both normal and pathological. CN provides a framework for defining cognitive models of natural language performance that include claims of validity at two levels, the natural computation or neural-like processing level, and the system result or behavioral level.

Using one implementation of a CN model, HOPE (Gigley, 1981; 1982a; 1982b; 1983), a model of single-sentence comprehension, the remainder of the paper will illustrate how the role of grammar can be integrated into the design of such a model.
It will emphasize the importance of the parallel control assumptions in constraining the representation in which the grammar is encoded. It will demonstrate how the grammar contributes to controlling the coordination of the parallel, asynchronous processes included in the model. The HOPE model is chosen explicitly because the underlying assumptions in its design are intended to be psychologically valid on two levels, while the other referenced models do not make such claims. The complete model is discussed in Gigley (1982a; 1982b; 1983) and will be summarized here to illustrate the role of the grammar in its function. The suggested implications and goals for including neurophysiological evidence in designing such models have been discussed elsewhere in Lavorel and Gigley (1983) and will be included only as they relate to the role and function of the grammar.

2.1 Summary of Included Knowledge and its Representation

The types of representations included in the HOPE model, phonetic, categorially accessed meanings, grammar, and pragmatic or local context, receive support as separately definable knowledge within studies of aphasia. There is a vast literature concerning what aspects of language are independently affected in aphasia that has been used as a basis for deciding these representations. (See Gigley, 1982b for complete documentation.)

Information that is defined within the HOPE model is presented at a phonological level as phonetic representations of words (a stub for a similar interactive process underlying word recognition). Information at the word meaning level is represented as multiple representations, each of which has a designated syntactic category type and an orthographic spelling associate to represent the phonetic word's meaning (also a stub).

The grammatical representation has two components. One is strictly a local representation of the grammatical structural co-occurrences in normal language. The other is a functional representation, related to interpretation, that is unique for each syntactic category type. Please note that "function" is not used here in the strictest sense of its use within a semantic system; it will be described in detail later. Finally, the pragmatic interpretation is assumed to reflect the sentential context of the utterance.

Each piece of information is a thresholding device with memory. Associational interconnections are made by using a hierarchical graph which includes a hypergraph facility that permits simultaneous multiple interpretations for any active information in the process. Using this concept, an active node can be ambiguous, representing information that is shared among many interpretations. Sentence comprehension is viewed as the resolution of the ambiguities that are activated over the time course of the process.

Within our implementation, graphs can represent an aspect of the problem representation by name. Any name can be attached to a node, or an edge, or a space (hypergraph) of the graph. There are some naming constraints required due to the graph processing system implementation, but they do not affect the conceptual representation on which the encoding of the cognitive linguistic knowledge relies.

Any name can have multiple meanings associated with it. These meanings can be interpreted differently by viewing each space in which the name is referenced as a different viewpoint for the same information.
This means that whenever the name is the same for any information, it is indeed the same information, although it may mean several things simultaneously. An example related to the grammatical representation is that the syntactic category aspect of each meaning of a phonetic word is also a part of the grammatical representation, where it makes associations with other syntactic categories. The associations visible in the grammatical representation and interpreted as grammatical "meanings" are not viewable within the phonetic word meaning perspective. However, any information associated with a name, for instance an activity value, is viewable from any space in which the name exists. This means that any interpreted meaning associated with a name can only be evaluated within the context, or contexts, in which the name occurs. Meaning for any name is contextually evaluable. The explicit meaning within any space depends on the rest of the state of the space, which furthermore depends on what previous processing has occurred to affect the state of that space.

2.2 Summary of the Processing Paradigm

The development of CN models emphasizes process. A primary assumption of this approach is that neural-like computations must be included in models which attempt to simulate any cognitive behavior (cf. Lavorel and Gigley, 1983), specifically natural language processing in this case. Furthermore, CN includes the assumption that time is a critical factor in neural processing mechanisms and that it can be a significant factor in language behavior in its degraded or "lesioned" state.

Simulation of a process paradigm for natural language comprehension in HOPE is achieved by incorporating a neurally plausible control that is internal to the processing mechanism. There is no external process that decides which path or process to execute next based on the current state of the solution space. The process is time-locked; at each process time interval there are six types of serial-order computations that can occur. They apply to all representation viewpoints or spaces simultaneously, and uniformly. Threshold firing can affect multiple spaces, and has a local effect within the space of firing.

Each of these serial-order computations is intended to represent an aspect of "natural computation" as defined in Lavorel and Gigley (1983). A natural computation, as opposed to a mechanistic one, is a "computation" that is achieved by neural processing components, such as threshold devices and energy transducers, rather than by components such as are found in digital devices. The most important aspect of the control is that all of the serial-order computations can occur simultaneously and can affect any information that has been defined in the instantiated model.

Processing control is achieved using activity values on information. As there is no preset context in the current implementation, all information initially has a resting activity value. This activity value can be modified over time depending on the sentential input. Furthermore, there is an automatic activity decay scheme intended to represent memory processing which is based on the state of the information, whether it has reached threshold and fired or not. Activity is propagated in a fixed-time scheme to all "connected" aspects of the meaning of the words by spreading activation (Collins and Loftus, 1975; Hinton, 1981; Quillian, 1968/1980). Simultaneously, information interacts asynchronously due to threshold firing.
A state of threshold firing is realized as a result of summed inputs over time that are the result of the fixed-time spreading activation, other threshold firing, or memory decay effects in combination. The time course of new information introduction, which initiates activity spread and automatic memory decay, is parameterized due to the underlying reason for designing such models (Gigley, 1982b; 1983; 1985). The exact serial-order processes that occur at any time-slice of the process depend on the "current state" of the global information; they are context dependent. The serial-order processes include:

(1) NEW-WORD-RECOGNITION: Introduction of the next phonetically recognized word in the sentence.

(2) DECAY: Automatic memory decay exponentially reduces the activity of all active information that does not receive additional input. It is an important part of the neural processes that occur during memory processing.

(3) REFRACTORY-STATE-ACTIVATION: An automatic change of state that occurs after active information has reached threshold and fired. In this state, the information can not affect or be affected by other information in the system.

(4) POST-REFRACTORY-STATE-ACTIVATION: An automatic change of state which all fired information enters after it has existed in the REFRACTORY-STATE. The decay rate is different than before firing, although still exponential.

(5) MEANING-PROPAGATION: Fixed-time spreading activation to the distributed parts of recognized words' meanings.

(6) FIRING-INFORMATION-PROPAGATION: Asynchronous activity propagation that occurs when information reaches threshold and fires. It can be INHIBITORY and EXCITATORY in its effect. INTERPRETATION results in activation of a pragmatic representation of a disambiguated word meaning.

Processes (2) through (6) are applicable to all active information in the global representation, while process (1) provides the interface with the external input of the sentence to be understood. The state of the grammar representation affects inhibitory and excitatory firing propagation, as well as coordinating "meaning" interpretation with on-going "input" processing. It is in the interaction of the results of these asynchronous processes that the process of comprehension is simulated.

3. The Role of a Grammar in Cognitive Processing Models

Within our behavioral approach to studying natural language processing, several considerations must be met. Justification must be made for separate representations of information and, whenever possible, neural processing support must be found.

3.1 Evidence for a Separate Representation of Grammar

Neurolinguistic and psycholinguistic evidence supports a separately interpretable representation for a grammar. The neurolinguistic literature demonstrates that the grammar can be affected in isolation from other aspects of language function (cf. studies of agrammatic and Broca's aphasia as described in Goodenough, Zurif, and Weintraub, 1977; Goodglass, 1976; Goodglass and Berko, 1960; Goodglass, Gleason, Bernholtz, and Hyde, 1970; Zurif and Blumstein, 1978).

In the HOPE model, this separation is achieved by including all relevant grammatical information within a space or hypergraph called the grammar. The associated interpretation functions for each grammatical type provide the interface with the pragmatic representation.
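Returning to the processing paradigm of Section 2.2, the "thresholding device with memory" and the serial-order computations (2)-(4) can be sketched as below. This is an illustrative reading, not the HOPE code, and the numeric constants are assumptions.

    THRESHOLD = 1.0
    DECAY_BEFORE_FIRING = 0.8   # exponential decay factor, assumed value
    DECAY_AFTER_FIRING = 0.6    # post-refractory decay differs, per the text

    class InfoNode:
        """A piece of information: an activity value plus a firing state."""
        def __init__(self, name):
            self.name = name
            self.activity = 0.1      # resting activity value
            self.state = "normal"    # "normal" | "refractory" | "post-refractory"
            self.inputs = 0.0        # activity received during this time slice

        def step(self):
            """One process time interval. Returns True if the node fires."""
            if self.state == "refractory":
                # (3) refractory: cannot affect or be affected by other information
                self.inputs = 0.0
                self.state = "post-refractory"
                return False
            self.activity += self.inputs
            self.inputs = 0.0
            if self.activity >= THRESHOLD:
                self.state = "refractory"
                return True          # (6) firing then propagates asynchronously
            # (2)/(4) automatic exponential decay; the rate depends on firing history
            rate = (DECAY_AFTER_FIRING if self.state == "post-refractory"
                    else DECAY_BEFORE_FIRING)
            self.activity *= rate
            return False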
Before describing the nature of the local representation of the currently included grammar, a brief discussion of the structure of the grammar and the role of the grammar in the global nature of the control must be given.

3.2 The Local Representation of the Grammar

The grammar space contains the locally defined grammar for the process. The current model defined within the HOPE system includes a form of a Categorial Grammar (Ajdukiewicz, 1935; Lewis, 1972). Although the original use of the grammar is not heeded, the relationship that ensues between a well-defined syntactic form and a "final state" meaning representation is borrowed. Validity of the "final state" meaning is not the issue. Final state here means at the end of the process. As previously mentioned, typed semantics is also not rigidly enforced in the current model.

HOPE allows one to define a lexicon within user-selected syntactic types, and allows one to define a suitable grammar of the selected types in the prescribed form as well. The grammar may be defined to suit the aspects of language performance being modelled.

There are two parts to the grammatical aspect of the HOPE model. One is a form of the structural co-occurrences that constitute context-free phrase structure representations of grammar. However, these specifications only make one-constituent predictions for subsequent input types, where each constituent may have additional substructure. Predictions at this time do not spread to substructures because of the "time" factor between computational updates that is used. A spread to substructures will require a refinement in time-sequence specifications.

The second aspect of the representation is an interpretation function for each specified syntactic type in the grammar definition. Each interpretation function is activated when a word meaning fires for whatever reason. The interpretation function represents a firing activation level for the "concept" of the meaning and includes its syntactic form. For this reason, each syntactic form has a unique functional description that uses the instantiated meaning that is firing (presently, the spelling notation) to activate structures and relations in the pragmatic space that represent the "meaning understood." Each function activates different types of structures and relations, some of which depend on prior activation of other types to complete the process correctly. These functions can trigger semantic feature checks and morphological matches where appropriate.

Syntactic types in the HOPE system are of two forms, lexical and derived. A lexical category type is one which can be a category type of a lexical item. A derived category type is one which is "composed." Derived category types represent the occurrence of proper "meaning" interpretation in the pragmatic space.

The currently represented grammar in HOPE contains the following lexical categories: DET for determiner, ENDCONT for end-of-sentence intonation, NOUN for common noun, PAUSE for end-of-clause intonation, TERM for proper nouns, VIP for intransitive verb, VTP for transitive verb. As is seen, the lexical "categories" relate "grammatical" structure to aspects of the input signal; hence, in this sense, ENDCONT and PAUSE are categories.
The derived categories in the current instantiated model include: SENTENCE, representing a composition of agent determination of a TERM for an appropriate verb phrase; TERM, representing a composed designated DET NOUN referent; and VIP, representing the state of proper composition of a TERM object with a VTP, transitive verb sense. TERM and VIP are examples of category types in this model that are both lexical and derived.

"Rules" in the independently represented grammar are intended to represent what is considered in HOPE as the "syntactic meaning" of the respective category. They are expressed as local interactions, not global ones. Global effects of grammar, the concern of many rule-based systems, can only be studied as the result of the time-sequenced processing of an "input". Table 1 contains examples of "rules" in our current model. Other categories may be defined, other lexical items defined, and other interpretations defined within the HOPE paradigm.

Table 1: Category specification

DET := TERM / NOUN
VIP := SENTENCE / ENDCONT
VTP := VIP / TERM

In Table 1, the "numerator" of the specification is the derived type which results from composition of the "denominator" type interpretation with the interpretation of the category whose meaning is being defined. For example, DETerminer, the defined category, combines with a NOUN category type to produce an interpretation which is a TERM type. When a category occurs in more than one place, any interpretation and resultant activity propagation of the correct type may affect any "rule" in which it appears. Effects are in parallel and simultaneous. Interpretation can be blocked for composition by unsuccessful matches on designated attribute features or morphological inconsistencies.

Successful completion of function execution results in a pragmatic representation that will either fire immediately, if it is non-compositional, or in one time delay, if the "meaning" is composed. Firing is of the syntactic type that represents the correctly "understood" entity. This "top-down" firing produces feedback activity whose effect is "directed" by the state of the grammar space, i.e., what information is active and its degree of activity.

The research in its present state has not addressed the generality of the linguistic structures it can process. This is left to future work. The concentration at this time is on initial validation of model-produced simulation results before any additional effort on expansion is undertaken. With so many assumptions included in the design of such models, initial assessment of the model's performance was felt to be more critical than its immediate expansion along any of the possible dimensions previously noted as stubs. The initial investigation is also intended to suggest how to expand these stubs.

3.3 The Grammar as a Feedback Control System

The role of the grammar as it is encoded in HOPE is to function in a systems-theoretic manner. It provides the representation of the feedforward, or prediction, and feedback, or confirmation, interconnections among syntactic entities which have produced appropriate entities as pragmatic interpretations. It coordinates the serial-ordered expectations with what actually occurs in the input signal and with any suitable meaning interpretations that can affect the state of the process in a top-down sense. It represents the interface between the serial-order input and the parallel functioning system.
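Under the stated reading of Table 1, the rule table and its two local roles, feedforward prediction and composition, might be encoded as below (an assumed encoding for illustration, not HOPE's graph representation):

    # "DERIVED := DEFINED / COMPOSED-WITH": the defined category composes with
    # the "denominator" category to yield the "numerator" (derived) type.
    GRAMMAR = {
        # defined category: (predicts / composes with, derives)
        "DET": ("NOUN", "TERM"),
        "VIP": ("ENDCONT", "SENTENCE"),
        "VTP": ("TERM", "VIP"),
    }

    def predicted_category(category):
        """Feedforward: the category an active instance expects next in the input."""
        entry = GRAMMAR.get(category)
        return entry[0] if entry else None

    def compose(category, next_category):
        """Return the derived type if the two categories compose, else None."""
        entry = GRAMMAR.get(category)
        if entry and entry[0] == next_category:
            return entry[1]
        return None

    # compose("DET", "NOUN") -> "TERM"; a TERM firing then feeds back to every
    # active meaning whose category predicts TERM (here, the VTP meanings).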
Grammatical categories are activated via spreading activation that is the result of word meaning activation as words are recognized. Firing of an instance of a grammatical type activates that type's interpretation function, which results in the appropriate pragmatic interpretation for it, including the specific meaning that was fired.

Interpretation functions are defined for syntactic types, not specific items within each type. Each type interpretation has one form with specific lexical "parameters". All nouns are interpreted the same; all intransitive verbs the same. What differs in interpretation is the attributes that occur for the lexical item being interpreted. These also affect the interpretation.

The meaning representation for all instances of a certain category has the same meta-structure. General nouns (NOUN) are presently depicted as nodes in the pragmatic space. The node name is the "noun meaning." For transitive verbs, nodes named as the verb stem are produced with a directed edge designating the appropriate TERM category as agent. The effect of firing of a grammatical category can trigger feature propagations or morphological checks depending on which category fires and the current pragmatic state of the on-going interpretation.

Successful interpretation results in threshold firing of the "meaning." This "meaning" has a syntactic component which can affect grammatical representations that have an activity value. This process is time-constrained depending on whether the syntactic type of the interpretation is lexical or derived.

3.4 Spreading Activation of the Grammar

Input to HOPE is time-sequenced, as phonetically recognized words (a stub for future development). Each phonetic "word" activates all of its associated meanings. (HOPE uses homophones to access meanings.) Using spreading activation, the syntactic category aspect of each meaning in turn activates the category's meaning in the grammar space representation.

Part of the grammatical meaning of any syntactic category is the meaning category that is expected to follow it in the input. The other part of the grammatical meaning for any category type is the type it can derive by its correct interpretation within the context of a sentence. Because each of these predictions and interpretations is encoded locally, one can observe interactions among the global "rules" of the grammar during the processing. This is one of the motivating factors for designing the neurally motivated model, as it provides insights into how processing deviations can produce degraded language performance.

3.5 Grammar State and Its Effect on Processing

Lexical category types have different effects than derived ones with respect to timing and pragmatic interpretation. However, both lexical and derived category types have the same effect on the subsequent input. This section will describe the currently represented grammar and provide example processing effects that arise due to its interactive activation.

Through spreading activation, the state of the syntactic types represented in the grammar affects subsequent category biases in the input (feedforward) and on-going interpretation or disambiguation of previously "heard" words (feedback). The order of processing of the input appears to be both right to left and left to right. Furthermore, each syntactic type, on firing, triggers the interpretation function that is particular to each syntactic type.
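As a small illustration of such per-type interpretation functions, consider the following sketch (assumed, simplified structures; the pragmatic space is modeled here as a set of nodes and labeled directed edges):

    pragmatic_nodes = set()
    pragmatic_edges = []   # (from_node, label, to_node)

    def interpret_noun(spelling):
        # NOUN: a node named by the "noun meaning"
        pragmatic_nodes.add(spelling)

    def interpret_vtp(stem, agent_term):
        # VTP: a node named by the verb stem, with a directed edge
        # designating the appropriate TERM as agent
        pragmatic_nodes.add(stem)
        pragmatic_edges.append((stem, "agent", agent_term))

    INTERPRETATION = {"NOUN": interpret_noun, "VTP": interpret_vtp}

    INTERPRETATION["NOUN"]("boy")          # firing "boy" as a NOUN
    INTERPRETATION["VTP"]("see", "boy")    # firing the VTP once a TERM agent exists

Each function has one form per syntactic type; only its lexical parameters (here the spelling and verb stem, both illustrative) vary.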
Rules, as previously discussed, are activated during processing via spreading activation. Each recognized word activates all "meanings" in parallel. Each "meaning" contains a syntactic type. Spreading activation along "syntactic type associates" (defined in the grammar) predictively activates the "expected" subsequent categories in the input. In the HOPE model, spreading activation currently propagates this activity, which is not at the "threshold" level. Propagated activity due to firing is always a parameter-controlled percentage of the above-threshold activity, and in the presently "tuned" simulations always propagates a value that is under threshold by a substantial amount. All activations occur in parallel and affect subsequent "meaning" activities of later words in the sentence.

In addition, when composition succeeds (or pragmatic interpretation is finalized), the state of the grammar is affected to produce changes in the category aspects of all active meanings in the process.

The remainder of this section will present instances of the feedforward and feedback effects of the grammar during simulation runs to illustrate the role of grammar in the process. The last example will illustrate how a change in state of the grammar representation can affect the process. All examples will use snapshots of the sentence: "The boy saw the building." This is input phonetically as: (TH-UH B-OY S-AO TH-UH B-IH-L-D-IH-NG).

3.5.1 An Example of Feedforward, Feedback, and Composition

This example will illustrate the feedforward activation of NOUN for the DETerminer grammatical meaning during interpretation of the initial TERM or noun phrase of the sentence. All figures are labelled to correspond with the text. Each interval is labelled at the top, t1, t2, etc. The size of each node reflects the activity level; larger means more active. Threshold firing is represented as F; other changes of state that affect memory are denoted by their own markers and are shown for completeness. They indicate serial-order changes of state described earlier, but are not critical to the following discussion.
Their meaning is only functional and has a "semantic" combosi- tional effect. Here 'tne' requires a "one and only one" NOUN that is unattached as a TERM to successfully denote the meaning of the boy as a proper TERM (i). As this is a compositional "meaning", the firing will affect t6. Because there is no active TERM prediction in the grammar space, and no competitive meanings, the top-down effect in t6 will be null and is not shown. The next exa~le will illustrate a top-down effect following TERM composition. 3.5.2 An Example of Feedforward, Feedback, Composition, and Subsequent Feedback This ex~,nple, shown in Figure 2, will be very similar to the previous one. Only active informa- tion discussed is shown as otherwise the figures become cluttered. The grammar is in a different state in Figure Z when successful TERM interpre- tation occurs at all (a). This is due to the activation at tg of all meanings of B-UI-L-O-IH-NG (b). The VTP meanings of /S-AO/ and then /B-UI-L-O-IH-NG/ make a TERM prediction shown as it remains in tlO (c). After composition of "the building" (a) shown in tel, TERM will fire top- down. It subsequently, through feedback,- acti- vates all meanings of the category type which predicted the TERM, all VTP type meanings in this case. This excitatory feedback, raises both VTP meanings in t12, for saw (d), as well as, building (e). However, the activity level of "building does" not reach threshold because of previous disembiguation of its NOUN meaning. When the VTP meaning, saw, fires (d) in t]2, additional comoosition occurs. The VTP interpretation composes with a suitable TERM (a), one which matches feature attribute specifications of saw, 329 / .tl 0 I I I " PRJU3~TZC bu£1dinq f / /.,::.--'!: i l l (:12 Figure 2. t 'O "" ¢ • ~ " -"- ~',--, I v ,_'.'" ~ ,,_., * # 0=- ~d~------------~/~"--~"-- ----." "~ L " i -"- "" ~" "~ " ~ ~ I " ~ "--\ L~WJ,h -- TEIm (b) S-AO ~ ~ ., 8-ZH-L-O-Zn-,~ _-- .. . . . . .('o "~,- . . . . %_. f -- ,m~ (e) um .L • ) m, tl 9.Z t3 t4 P#AOMAI~C: S-A~ Figure 3. ~I-Ull 330 to produce a VIP type at t13 this will sub- sequently produce feedback at t14, Neither are shown. 3.5.3 Effect of a Oifferent Grammar State on Processing The final example, Figure 3, will use one of the "lesion" simulations using HOPE. The grammar representations remain intact. This example will present the understanding of the first three words of the sentence under the condition that they are presented faster than the system is processing. Effectively, a slow-down of activation spread to the grammar is assumed. Figures such as Figure 1 and Figure 3 can be compared a to suggest possible language performance problems and to gain insights into their possible causes. In Figure 3, when /TH-UH/ is introduced at tl Ca), all meanings are activated (b) as in Figure 1. The spread of activation to the grammar occurs in t2 (c). However, the second word, /8-OY/ (d) is '*heard" at the same time as the activity reaches the grammar. The predictive activation spread From the grammar takes effect at t3, when the new word /S-N)/ (e) is "heard." The immediate result is that the NOUN meaning, saw (f), fires and is interpreted at t4 (g). This shows in a very simple case, now the grammar can affect the processing states of an interactive parallel model. Timing can be seen to be critical. 
There are many more critical results that occur in such "lesion" simulations that better illustrate such grammatical effects; however, they are very difficult to present in a static form, other than within a behavioral analysis of the overall linguistic performance of the entire model. This is considered a hypothesized patient profile and is described in Gigley (1985). Other examples of processing are presented in detail in Gigley (1982b; 1983).

3.6 Summary

The above figures present a very simple example of the interactive process. It is hoped that they provide an idea of the interactions and the feedback/feedforward processing that is coordinated by the state of the grammar. Any prediction in the grammar that is not sufficiently active affects the process. Any decay that accidentally reduces a grammatical aspect can affect the process. The timing of activation, the categorial content, and the interactions between interpretation and prediction are important factors when one considers grammar as part of a functioning dynamic system.

Finally, the Categorial Grammar is one form of a context-free (CF) grammar which provides a suitable integration of syntactic and semantic processing. In addition, it has been used in many studies of English, so that instances of grammars sufficiently defined for the current implementation level of processing could be found. Other forms of grammar, such as Lexical-Functional Grammar (Kaplan and Bresnan, 1982) or Generalized Phrase Structure Grammar (Gazdar, 1982; 1983), could be equally suitable. The criteria to be met are that they can be encoded as predictive mechanisms, not necessarily unambiguous or deterministic, and also that they specify constraints on compositionality. The composition depends on adequate definition of interpretation constraints to assure that it is "computed" properly or else suitably marked for its deviation.

4. Conclusion

HOPE provides evidence for how one can view a grammar as an integrated part of a neurally-motivated processing model that is psychologically valid. Suitable constraints on grammatical form that are relevant for using any grammar in the CN context are that the grammar make serial predictions and provide the synchronization information to coordinate top-down effects of interpretation with the on-going process.

This type of model suggests that universals of language are inseparable from how they are computed. Universals of language may only be definable within neural substrata and their processes. Furthermore, if this view of linguistic universals holds, then grammar becomes a control representation that synchronizes the kinds of signals that occur and when they get propagated. The states of the grammar in this suggested view of grammatical function are a form of the rewrite rules that are the focus of much linguistic theory.

A neurally motivated processing paradigm for natural language processing demonstrates how one can view an integrated process for language that employs integrated syntactic and semantic processing which relies on a suitable grammatical form that coordinates the processes.

5. Acknowledgements

The initial development of the reported research was supported by an Alfred P. Sloan Foundation Grant for "A Training Program in Cognitive Science" at the University of Massachusetts at Amherst. Continuing development is supported through a Biomedical Research Support Grant at the University of New Hampshire.

6. References

Ajdukiewicz, K. Die Syntaktische Konnexität, 1935.
Translated as "Syntactic Connection" in Polish Logic, S. McCall (ed.), Oxford, 1967, 207-231.

Arbib, M.A. and Caplan, D. Neurolinguistics Must Be Computational. Behavioral and Brain Sciences, 1979, 2, 449-483.

Collins, A.M. and Loftus, E.F. A spreading activation theory of semantic processing. Psychological Review, 1975, 82:6, 407-428.

Cottrell, G.W. and Small, S.L. A Connectionist Scheme for Modelling Word Sense Disambiguation. Cognition and Brain Theory, 1983, 6:1, 89-120.

Cottrell, G.W. A Model of Lexical Access of Ambiguous Words. Proceedings of AAAI-84, 1984.

Gazdar, G. Phrase Structure Grammar. In P. Jacobson and G. Pullum (eds.), The Nature of Syntactic Representation. Reidel, Dordrecht, 1982.

Gazdar, G. Phrase Structure Grammars and Natural Languages. Proceedings of the International Joint Conference on Artificial Intelligence, Karlsruhe, West Germany, 1983.

Gigley, H.M. Neurolinguistically Based Modeling of Natural Language Processing. Paper presented at the ACL Session of the Linguistic Society of America Meeting, New York, December, 1981.

Gigley, H.M. A computational neurolinguistic approach to processing models of sentence comprehension. COINS Technical Report, University of Massachusetts, Amherst, 1982a.

Gigley, H.M. Neurolinguistically constrained simulation of sentence comprehension: Integrating artificial intelligence and brain theory. Ph.D. Dissertation, University of Massachusetts, Amherst, 1982b.

Gigley, H.M. HOPE -- AI and the Dynamic Process of Language Behavior. Cognition and Brain Theory, 1983, 6, 1.

Gigley, H.M. Computational Neurolinguistics -- What is it all about? Proceedings of IJCAI 85, Los Angeles, to appear.

Gigley, H.M. From HOPE en l'ESPERANCE -- On the Role of Computational Neurolinguistics in Cross-Language Studies. Proceedings of COLING 84, Stanford University, July, 1984.

Goodenough, C., Zurif, E. and Weintraub, S. Aphasics' attention to grammatical morphemes. Language and Speech, 1977, 11-19.

Goodglass, H. Agrammatism. In H. Whitaker and H.A. Whitaker (eds.), Studies in Neurolinguistics, Vol. 1, Academic Press, 1976, 237-260.

Goodglass, H. and Berko, J. Agrammatism and inflectional morphology in English. Journal of Speech and Hearing Research, 1960, 3, 257-267.

Goodglass, H., Gleason, J., Bernholtz, N. and Hyde, M. Some Linguistic Structures in the Speech of a Broca's Aphasic. Cortex, 1970, 8, 191-212.

Hinton, G.E. Implementing Semantic Networks in Parallel Hardware. In G.E. Hinton and J.A. Anderson (eds.), Parallel Models of Associative Memory. Lawrence Erlbaum Associates, 1981.

Kaplan, R.M. and Bresnan, J. Lexical-Functional Grammar: A Formal System for Grammatical Representation. In J. Bresnan (ed.), The Mental Representation of Grammatical Relations. MIT Press, 1982.

Lavorel, P.M. and Gigley, H.M. Elements pour une theorie generale des machines intelligentes. Intellectica, 1983, 7, 20-38.

Lewis, D. General Semantics. In Davidson and Harman (eds.), Semantics of Natural Language, 1972, 169-218.

Quillian, M.R. Semantic Memory. In M. Minsky (ed.), Semantic Information Processing. Cambridge, MA: MIT Press, 1968/1980.

Small, S., Cottrell, G., and Shastri, L. Toward connectionist parsing. Proceedings of the National Conference on Artificial Intelligence, Pittsburgh, PA, 1982.

Waltz, D. and Pollack, J. Massively Parallel Parsing: A Strongly Interactive Model of Natural Language Interpretation. Cognitive Science, in press.

Zurif, E.B. and Blumstein, S.E. Language and the Brain. In M. Halle, J. Bresnan, and G.
Miller (eds.), Linguistic Theory and Psychological Reality. MIT Press, 1978.
UNIVERSALITY AND INDIVIDUALITY: THE INTERACTION OF NOUN PHRASE DETERMINERS IN COPULAR CLAUSES

John C. Mallery
Political Science Department and Artificial Intelligence Laboratory
Massachusetts Institute of Technology
545 Technology Square, NE43-797
Cambridge, MA 02139, USA
Arpanet: JCMA at MIT-MC

Abstract

This paper presents an implemented theory for quantifying noun phrases in clauses containing copular verbs (e.g., "be" and "become"). Proceeding from recent theoretical work by Jackendoff [1983], this computational theory recognizes the dependence of the quantification decision on the definiteness, indefiniteness, or classness of both the subject and object of copular verbs in English. Jackendoff's intuition about the quantificational interdependence of subject and object has been imported from his broader cognitive theory and reformulated within a constraint propagation framework. Extensions reported here include the addition of more active determiners, the expansion of determiner categories, and the treatment of displaced objects. A further finding is that quantificational constraints may propagate across some clausal boundaries. The algorithm is used by the RELATUS Natural Language Understanding System during a phase of analysis that posts constraints to produce a 'constraint tree.' This phase comes after creation of syntactic deep structure and before sentential reference in a semantic-network model. Incorporation of the quantification algorithm in a larger system that parses sentences and builds semantic models from them makes RELATUS able to acquire taxonomic and identity information from text.

Introduction

The quantification of noun phrases, determining their universality or individuality, is critical for the automatic acquisition of taxonomic and identity information from natural language sentences. Automatic acquisition can convert ordinary texts into sources of taxonomic and identity information for use by learning and reasoning programs in artificial intelligence. Such information can also find use in efforts to develop selection restrictions from lexical sources. Of course, proper quantification of noun phrases also plays a key role in computer programs that endeavor to understand natural language.

The theory for computing the quantificational status of noun phrases for the case of copular verbs (e.g., "be" and "become") was inspired by recent theoretical work of Jackendoff [1983]. Jackendoff noted that quantification of noun phrases for copular verbs depends jointly on the definiteness of both the subject NP and the object NP [1983: 90-91].[1] His intuition has been reformulated, augmented, and implemented in the RELATUS Natural Language Understanding System.[2] The implemented quantification theory is used by RELATUS as it incrementally builds a semantic model. This method recovers class and identity information from ordinary English sentences. Although the semantic model must be occasionally queried to resolve quantificational ambiguities, the method is primarily syntactic and does not require reasoning. The computational simplicity and broad coverage of the theory allow successful quantification of noun phrases in most copular clauses.
Work is in progress to extend the analysis to partitives and thereby yield a comprehensive analysis. While this approach does not treat such difficult issues as belief contexts and metaphorical usages, it does address most literal cases. Since the quantification theory is deployed in a natural language system that parses sentences and builds a semantic model from them, RELATUS becomes, among other things, a system for acquiring class structure information from ordinary English texts.

[1] I will use "object NP" to refer to what is frequently called a "predicate object."

[2] The experimental RELATUS Natural Language Understanding System represents the combined efforts of Gavan Duffy and the author. Gavan Duffy is responsible for the parser, the categorial disambiguator [Duffy, 1985b], the lexicon, and the lexicon utilities. The author is responsible for the representation system, the reference system, the component that maps deep structure to semantics, the quantification system, the inversion system, and the question-answering component.

Fig. 1. Sentence Processing in RELATUS

  Syntactic Analysis             Input: Text Stream
                                 Output: Surface Structure, Deep Structure
  Sentential Constraint Posting  Input: Surface Structure, Deep Structure
                                 Output: Sentential Constraint Tree
  Sentential Reference           Input: Sentential Constraint Tree, Semantic Representation
                                 Output: Sentence Merged into Semantic Representation

The quantification algorithm is embedded in a sentential constraint-posting process [Duffy and Mallery, 1984] shown in figure 1. Sentential constraint-posting creates a constraint tree that corresponds roughly to what transformational grammarians call logical form. The constraint tree is used to perform intersentential reference (merging successive sentences into a single semantic-network model) [Mallery, 1985]. The input to the constraint-posting phase is both surface structure and deep structure canonicalized by a transformational parser [Duffy, 1985a]. In a depth-first, bottom-up walk of the deep structure, constraints describing grammatical relations are posted on non-terminal parse-graph nodes.[3] When verbs in major clauses (i.e., clauses other than relative clauses or clausal adjuncts) are reached, they supervise the quantification of noun phrases they command. If these verbs are copular verbs, the copular interconstraint algorithm is applied. In other cases, another experimental algorithm performs quantification by drawing on logical relations from surface structure. The result of this process is the sentential constraint tree. It is a hierarchical description of grammatical and logical relations that is suitable input for the reference system. By sequentially referencing the sentences of a text, a semantic model of the text is incrementally constructed.

The Copular Interconstraint Algorithm

Within a constraint-posting framework, the basic task of NP quantification is to decide whether to post a constraint marking the NP as an individual or a universal. Since the task involves knowing the specific subject and object of a copular verb, it is delegated to a higher constituent, the verb.[4] This delegation is motivated by the principle of local decision-making, which holds that decisions should be located where all required information is both available and proximate. In this case,
only the verb knows the identities of both noun phrases, due to the hierarchical structure of grammatical relations. Thus, when a verb posts its own constraints, it also directs the quantification of the NPs that it dominates (e.g., its subject and object). This procedure was reformulated in a constraint propagation [Waltz, 1975] framework because features of a single constituent cannot be determined independently of other constituents in the sentential derivation. Since quantificational constraint propagates in both directions, this process is a type of constituent interconstraint. Fortunately, the possible states of NPs are only three: definite, indefinite, and class. Because the number of possible NP states is small (3) and the number of variables is also small (2), a simple table-lookup algorithm "compiles" subject and object quantifications for all possible configurations of NP definiteness.[5]

[3] At present, the RELATUS system builds sentential constraints using the canonical grammatical relations of the sentence, the quantification status of noun phrases, and the truth values of verbs. Work is in progress to incorporate temporal constraints on verbs, temporal adjectives, and various types of context markers.

[4] The RELATUS parser uses non-standard parse graphs. A "kern" corresponds to a clause while a "verbal" is something like a verb phrase, except that the kern tells it what its subject, object, and modifiers are at constraint-posting time. For further details, see Duffy [1985a].
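The delegation just described can be pictured with a small sketch (assumed node structures, not RELATUS's kern/verbal parse graphs): in the bottom-up walk, a copular verb is the one constituent that sees both NPs, so it alone calls the interconstraint.

    COPULAR_VERBS = {"be", "become"}

    def post_constraints(node, interconstrain):
        """Depth-first, bottom-up walk; verbs supervise the NPs they command."""
        for child in node.get("children", []):
            post_constraints(child, interconstrain)
        if node.get("type") == "verb" and node.get("stem") in COPULAR_VERBS:
            subj, obj = node["subject"], node["object"]
            # the verb, not the NPs, decides: the principle of local decision-making
            subj["quant"], obj["quant"] = interconstrain(subj["def"], obj["def"])

Here interconstrain is the compiled table lookup of figure 4; a sketch of it appears after figure 5 below.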
I have added classm,ss It) his scheme m order tt) cope with such determiners as "all', "any' and "every'. While Jackendol'fs examples use only the determiners 'a'. "an', and 'the', I have Ibund intcrpretati~ms Ihr additional determiners which are summarized in figure 2. JackcndtflT considers proper nouns to be definite and the same is done here, except in certain cases t)l" phnal proper nouns which are interpreted a.s the plural indefinite (scc $21 in figure 5). The addition of the cla~s cate~urization calls lbr the'class determiners in the bottom of Iigure 2. The determiner. "no', is trevtcd as the negation of 'all.' Thus, the NP is quantified as a ckL~s and the copula negated. While SI0 and S18 in I]gure 5 are valid, S19 is not. There are restrictions on where "no" can appear. It cannot modify b~)lh the subject ~md object. Nor can it modify the t~hjcct when the subject is indefinite (S19) ~r a universal ($2()). but it can when the suhj~:ct is tlcl]nite (,%1~). lhc~c rcstricti~ms ~ccm generally valid Ibr literal cases cvcn lhough some idiomatic and mct,lphorical ctmsUttctions nlay vi(}late them. Vari()us casc:~ of dcturnlincr-lcss NPs are handled hy lhc alg,)rithm dial determines NP dcliniteness. Those t:ascs arc listed in figure 3. The indclhlllC category may hc incompletely handled hccause lhe thco,'y tlous not yet encompass partitives -- imlcfinite NPs dlat ix,rtitkm collcctiuns of individuals or universals. Thus, determiner-less NPs with plmal hc:;d nouns are not amdyzcd Ibr partitive readings. 37' Fig. 4. Universe of lnterconstnfint Categorizations Case Sentence Determiner Classification Noun Phrase Quantification Subject Object Subject Object C1 SI. $2 Indefinite Indefinite Class Class C2 $3 Indefinite Class Class Class C3 $4 Indefinite Definite Class Class C4 $5, $6, $7 Definite Definite Individual* Individual* C5 $8. $9 Definite Indefinite Individual* Class C6 Sll Definite Class Individual Class+ C7 Sl2 Class Definite Class Class C8 St3 Class Class Class Class+ C9 $10, S14 Class Indefinite Class Class * Indicates the possibility ofquantificational ambiguity. + Indicates that grammatical sentences must have displaced objects~ P:ntitJve determiners may engender two readings. Ihe Nl's they modify can be read as either collc~:tt~ms o1' individuals or universals. Some partitive determiners sttch tin 's()llle,' 'each,' 'nu~st', "few'. or "many' are tt.'.,cd It) make statements abemt subsets hi" a coil,cotton. With the exception of "some,' thc~,c are missing from" figure 2 pending research about how to determine their quantificatitm. 'Some' is interpreted just as an indefinite because o1" its high frequency. I hc detclmincrs 'all,' "any," and "every,' wcrc included because they refer to the entirety of a collection. None of the partitive determiners, cvcn the ~mes currently used to determine dassncss, will be adeqtmtcly handled until completiun of continuing work ,m the syntactic parse graphs and the interactitm characteristics of partitives. S{~metimcs copular verbs take adjectives in the object position, leaving no apparent object. Some of these adjectives have a displaced ohjec/ as in $2, SI I, Sl3, .";15 and ~16 in ligure 5. Wcrc there actually no-bject, the qtmntil]cation or the subject w~tlhl bc determined in is~i'.'tiun (u.',ing a different algorithm). When the adjective has an ~bjcct. that object is u:;cd to perfimn the NP intcreonstraint with the subject. (Ja.,,cs C6 and C8 are imp~,,.,,ible ($22 and $24) unlcss the sentences have displact:d uhjccts (SI 1 and S13). 
Partitive determiners may engender two readings. The NPs they modify can be read as either collections of individuals or universals. Some partitive determiners such as 'some,' 'each,' 'most,' 'few,' or 'many' are used to make statements about subsets of a collection. With the exception of 'some,' these are missing from figure 2 pending research about how to determine their quantification. 'Some' is interpreted just as an indefinite because of its high frequency. The determiners 'all,' 'any,' and 'every' were included because they refer to the entirety of a collection. None of the partitive determiners, even the ones currently used to determine classness, will be adequately handled until completion of continuing work on the syntactic parse graphs and the interaction characteristics of partitives.

Sometimes copular verbs take adjectives in the object position, leaving no apparent object. Some of these adjectives have a displaced object, as in S2, S11, S13, S15 and S16 in figure 5. Were there actually no object, the quantification of the subject would be determined in isolation (using a different algorithm). When the adjective has an object, that object is used to perform the NP interconstraint with the subject. Cases C6 and C8 are impossible (S22 and S24) unless the sentences have displaced objects (S11 and S13). However, this is not the case for C6 where a copular verb is modal and has a partitive determiner on its object (S23). This suggests that partitive readings of class determiners may make these cases acceptable and that displaced objects simply make such a reading easier.

Displaced objects appear as the NPs to which "relative pronouns" bind in relative clauses or appositives. S15 provides an example of interconstraint across a relative clause. There, 'a philosopher' is the displaced subject of the displaced object, 'an Ionian stoic.' Interestingly, 'a philosopher' is also a displaced object with respect to 'Mary.' Recall that constraint posting proceeds from the bottom of the parse graph up the hierarchy of grammatical relations, with quantification following along and being governed by major verbs. In S15, quantification interconstraint is first applied to 'a philosopher' and 'an Ionian stoic' by the copula of the relative clause; then, it is applied to 'Mary' and 'a philosopher' by the major copula. Since the first NP interconstraint fixes 'a philosopher' as a universal, that result is then carried over into the interconstraint with 'Mary.' In both S15 and S16, the quantificational constraint propagates across clausal boundaries because both clauses share the same NP as an object and a subject. Cases such as these should not lead to inconsistent quantifications. Instead, they should agree, attesting to the soundness of the algorithm.

Jackendoff [1983: 97] argues that cases C4 and C5 in figure 4 are semantically ambiguous. This ambiguity seems only to hold for the determiner 'the' and is resolved by a simple reference of the NP in the semantic representation.6 If the ambiguous NP has no referent in the current discourse focus [Grosz, 1977], then the NP must be a universal. If there is a referent, it is either a universal or an individual, and the same

Fig. 5. Sentences Exhibiting Copular Interconstraint

(S1)   i->c, i->c          A dog is not a reptile. (Generic categorization [Jackendoff, 1983: 95])
(S2)   i->c, i->c          An antelope is not similar to a fish.
(S3)   i->c, c->c          A priest is similar to all religious figures.
(S4)   i->c, d->c          Parallelism is not the panacea of combinatorial explosion.
(S5)   d->i, d->i          Clark-Kent is the man who was given the martini by Mary.
(S6)   d->i, d->i          Clark-Kent is Superman. (Identity [Jackendoff, 1983: 95])
(S7)   d->i/d->c, d->i/d->c  The tiger is the fiercest beast of the jungle.
(S8)   d->i, i->c          Clark-Kent is a friendly super-hero. (Ordinary categorization [Jackendoff, 1983: 95])
(S9)   d->i/d->c, i->c     The tiger is a frightening beast. [Jackendoff, 1983: 97]
(S10)  c->c, i->c          No mammal is a reptile.
(S11)  d->i, c->c          George was similar to every professor in the school.
(S12)  c->c, d->c          All sycophants are the heart-throb of vanity.
(S13)  c->c, c->c          Every man is similar to any biped.
(S14)  c->c, i->c          All men are fallible creatures.
(S15)  d->i, i->c, i->c    Mary is similar to a philosopher who is close to an Ionian stoic.
(S16)  d->i, d->i, i->c    Mary is similar to the philosopher who is close to an Ionian stoic.
(S17)  d->i, d->i          Clark-Kent is the man drinking the martini. [Jackendoff, 1983: 88-89]
(S18)  d->i, c->c          Joe is no reptile.
(S19)*                     A mammal is no reptile.
(S20)*                     Every mammal is no reptile.
(S21)  i->c, i->c          Habs are as common as fruit flies.
(S22)*                     The woman is all lawyers.
(S23)  d->i, c->c          The woman could be any lawyer.
(S24)*                     All mammals are every warm-blooded creature.
(Definiteness: i, d, or c) -> (Quantification: i or c) indicates the analysis of the NP. The definiteness categories: indefinite (i), definite (d), and class (c). The quantification categories: individual (i), class (c). * indicates an ungrammatical sentence.

quantification should be chosen. Where both appear in the discourse focus, the individual reading is preferred. This is particularly important for C4 because the status of the subject is needed to predict that of the object. In either case, both must have the same quantificational status.

The analysis of NP quantification in copular clauses is significantly simplified by the fact that there is no need to analyze quantifier scoping. This follows from the absence of a passive interpretation for copular verbs. They are specialized in conveying classificational information rather than expressing active changes of state. Since there is no agent and no object which is acted upon, passive constructions can have no meaningful interpretation. Interchanging the subject and the object either has no effect on identity statements or inverts the classification relationship in other cases. Thus, the semantic specialization of copular verbs in conveying links of class hierarchies simplifies aspects of their syntactic analysis.

Fig. 6. Classification of Copular Links in RELATUS

Subject       Real Object     Link Classification
Individual    Universal       Ordinary Classification
Universal     Universal       Generic Classification
Individual    Individual      Identity Relation
Either        Adjective       Quality

A Glimpse At Semantics

Since RELATUS incrementally constructs a semantic model of the sentences it analyzes, the copular interconstraint algorithm allows a class structure to be automatically acquired. The way in which this information is represented in RELATUS exploits the encoding scheme underlying English usage of copular verbs. This encoding method allows four types of linking relations to be encoded using a single token (i.e., 'be'). This encoding is summarized in figure 6. Since the types can be differentiated according to the quantification of the nodes linked, the unique representation of each link type does not require the introduction of ad hoc tokens.

Ordinary and generic classification are used to construct the taxonomy. When two individuals are linked by a 'be' relation, identity between them is represented. Identity between two universals is represented with two generic classifications, indicating that each universal is a subset of the other. For predicate adjectives, a special token (e.g., 'HQ,' 'HAS-QUALITY') is used as the relation and the adjective as the object in order to represent a three-place property [Winston 1980, 1982]. This avoids confusion when a word token has uses both as an adjective and a noun.

6. Such a strategy has been followed for other types of ambiguous preposition and clause bindings [Hirst, 1981, 1984; Duffy, 1985b].
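The four-way link classification of figure 6 follows directly from the quantifications computed in step (2), reading class quantification as "universal". A minimal sketch, continuing the illustrative encoding above; 'HQ' is the special quality token mentioned in the text:

    # Sketch of the figure 6 link classification for 'be'.
    def classify_link(subject_quant, object_is_adjective, object_quant=None):
        """Map a copular clause onto one of the four link types."""
        if object_is_adjective:
            return "quality"            # encoded with the 'HQ' token
        return {
            ("individual", "class"):      "ordinary-classification",
            ("class", "class"):           "generic-classification",
            ("individual", "individual"): "identity",
        }[(subject_quant, object_quant)]

    # 'Clark-Kent is a friendly super-hero.' (S8): individual under a universal.
    assert classify_link("individual", False, "class") == "ordinary-classification"
    # 'A dog is not a reptile.' (S1): two universals, generic classification.
    assert classify_link("class", False, "class") == "generic-classification"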
Because RELATUS incorporates a theory of interpretive semantics, where syntactic canonicalization is performed on input and semantic equivalence is determined only through reasoning over a syntactically canonical representation, this encoding system is particularly appropriate. Because no post-processing is needed to substitute distinct tokens for the different types of linking relations, this encoding also simplifies quantification of copular clauses and, therefore, the constraint posting process in general. The encoding method only requires a small constant increase in time for walking the created class structure. Thus, the potential gain in efficiency by using a more explicit encoding technique is not great and might be offset by other factors.

Conclusions

The copular interconstraint algorithm presented in this paper has been used successfully in large text applications over the past year. Once the research on partitives is completed, the algorithm will cover an even larger proportion of copular-verb cases. Work has been done on copular questions but is too complex for discussion here, largely due to pragmatic interactions. Conjunctions have been treated just like ordinary NPs, except that error checking ensures that all NPs in conjunctions agree in definiteness. The idea of constraint propagation has been extended experimentally to non-copular verbs using a different propagation algorithm. The approach has been successful thus far. However, more research is required to analyze interactions between various quantification algorithms and to ascertain the propagation characteristics of different verbs, according to their senses and meanings. Quantifier scoping, algorithm interaction, and differential propagation are some of the characteristics of general constituent interconstraint that make it more difficult. In general, propagation of quantificational constraints seems a promising approach to previously recalcitrant problems. Even so, strong psychological claims must await further research and exhaustive analyses across languages.

Recent interest in developing lexicons to support computer understanding of natural language [Walker and Amsler, 1985] suggests the need for effective methods of augmenting our lexicographical knowledge using large corpora and unrestricted text. Selection restrictions are an important type of information to accumulate because they are needed not only to distinguish different senses of words but also to recognize metaphorical uses. Since accumulation of selection restrictions requires it, acquisition of taxonomic information is a priority. The copular interconstraint algorithm introduced in this paper provides a basis for acquiring large taxonomies from unrestricted texts. A filter can be used to quickly prune all non-copular sentences as well as difficult copular sentences involving belief and, perhaps, time contexts. The remaining sentences can be parsed, quantified, and represented in a large semantic model. This research would not only advance our knowledge of natural taxonomies and selection restrictions but would also generate empirical data useful for those studying 'default logics' and stereotype hierarchies [Minsky, 1975; Keil, 1979; Reiter, 1980; Brachman, 1982; Etherington and Reiter, 1983].
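The acquisition procedure suggested here -- filter, parse, quantify, represent -- can be sketched as follows. This is an illustration of the proposed pipeline, not part of RELATUS; all function names and the clause attributes are hypothetical.

    # Hypothetical sketch of the proposed taxonomy-acquisition pipeline.
    def acquire_taxonomy(corpus, parse, quantify_clause, semantic_model):
        for sentence in corpus:
            # Prune non-copular sentences and difficult copular ones
            # (belief and time contexts) before parsing.
            if not looks_copular(sentence) or looks_opaque(sentence):
                continue
            clause = parse(sentence)
            subj_q, obj_q, _ = quantify_clause(clause)
            # classify_link is the figure 6 sketch given earlier.
            link = classify_link(subj_q, clause.object_is_adjective, obj_q)
            semantic_model.add(clause.subject, link, clause.object)
        return semantic_model

    def looks_copular(sentence):
        return " is " in sentence or " are " in sentence    # crude filter

    def looks_opaque(sentence):
        return any(w in sentence for w in ("believes", "thought", "will be"))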
One difficulty with this research program is that an uncertainty principle is at work: the taxonomy used to determine selection restrictions itself depends on recognition of metaphors through selection restrictions. Success in this lexicographical task will require the careful development of effective research strategies.

Acknowledgments

This paper was improved by comments from Jonathan Connell, Gavan Duffy, Margaret Fleck, Robert Ingria, David McAllester, Rick Lathrop, and David Waltz. This research was encouraged and supported in various ways by Hayward Alker, Mike Brady, Berthold Horn, Tom Knight, Marvin Minsky, Gerald Sussman, and Patrick Winston. Gavan Duffy's parse graphs made this research possible. The RELATUS system was designed and implemented by the author and Gavan Duffy. This research was done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the Laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract number N00014-80-C-0505. Responsibility for the content, of course, remains with the author.

References

Brachman, Ronald J., [1982], "What 'IsA' Is And Isn't," Proceedings of the Fourth Biennial Conference of The Canadian Society for Computational Studies of Intelligence, pp. 212-21.

Duffy, Gavan, [1985a], "Bootstrapping Performance from Competence: The Development Strategy of the RELATUS Parser," forthcoming.

Duffy, Gavan, [1985b], "Categorial Disambiguation," forthcoming.

Duffy, Gavan, and John C. Mallery, [1984], "Referential Determinism and Computational Efficiency: Posting Constraints from Deep Structure," Proceedings of the American Association for Artificial Intelligence, pp. 101-105.

Etherington, David W., and Raymond Reiter, [1983], "On Inheritance Hierarchies With Exceptions," Proceedings of the American Association for Artificial Intelligence, pp. 104-108.

Grosz, Barbara J., [1977], The Representation and Use of Focus in Dialogue Understanding, Menlo Park: SRI International, Technical Note 151.

Hirst, Graeme, [1981], Anaphora in Natural Language Understanding: A Survey, Berlin: Springer-Verlag.

Hirst, Graeme, [1984], "A Semantic Process for Syntactic Disambiguation," Proceedings of the American Association for Artificial Intelligence, pp. 148-152.

Jackendoff, Ray S., [1983], Semantics and Cognition, Cambridge, Mass.: MIT Press.

Keil, Frank C., [1979], Semantic and Conceptual Structure: An Ontological Perspective, Cambridge, Mass.: Harvard University Press.

Mallery, John C., [1985], "Constraint-Interpreting Reference," Cambridge, Mass.: MIT Artificial Intelligence Laboratory, AI Memo No. 827, May 1985.

Minsky, Marvin, [1975], "A Framework for Representing Knowledge," in [Winston, 1975], pp. 211-277.

Reiter, Raymond, [1980], "A Logic for Default Reasoning," Artificial Intelligence, pp. 81-132.
Waltz, David L., [1975], "Understanding Line Drawings of Scenes with Shadows," in Winston [1975], pp. 19-91.

Walker, Donald E., and Robert A. Amsler, [1985], "The Use of Machine-Readable Dictionaries in Sublanguage Analysis," in Ralph Grishman and Richard Kittredge, editors, Sublanguage: Description and Processing, Lawrence Erlbaum Associates, 1985.

Winston, Patrick, [1975], editor, The Psychology of Computer Vision, New York: McGraw-Hill.

Winston, Patrick H., [1980], "Learning and Reasoning by Analogy," Communications of the ACM, 23 (12).

Winston, Patrick H., [1982], "Learning New Principles From Precedents And Exercises," Artificial Intelligence, 19 (3).
MEINONGIAN SEMANTICS FOR PROPOSITIONAL SEMANTIC NETWORKS

William J. Rapaport
Department of Computer Science
University at Buffalo, State University of New York
Buffalo, NY 14260
rapaport%buffalo@csnet-relay

ABSTRACT

This paper surveys several approaches to semantic-network semantics that have not previously been treated in the AI or computational linguistics literature, though there is a large philosophical literature investigating them in some detail. In particular, propositional semantic networks (exemplified by SNePS) are discussed; it is argued that only a fully intensional ("Meinongian") semantics is appropriate for them, and several Meinongian systems are presented.

1. SEMANTICS OF SEMANTIC NETWORKS.

Semantic networks have proved to be a useful data structure for representing information, i.e., a "knowledge" representation system. (A better terminology is "belief" representation system; cf. Rapaport and Shapiro 1984, Rapaport 1984b.) The idea is an old one: inheritance networks (Fig. 1), like those of Quillian 1968,

[Figure: Fig. 1. An inheritance network.]

Bobrow and Winograd's KRL (1977), or Brachman's KL-ONE (1979), bear strong family resemblances to "Porphyry's Tree" (Fig. 2) -- a mediaeval device used to illustrate the Aristotelian theory of definition by species and differentia (cf. Kretzmann 1966, Ch. 2; Kneale and Kneale 1966: 232). It has been pointed out that there is nothing essentially "semantic" about semantic networks (Hendrix 1979; but cf. Woods 1975, Brachman 1979). Indeed, viewed as a data structure, it is arguable that a semantic network is a language (possibly with an associated logic or inference mechanism) for representing information about some domain and, as such, is a purely syntactic entity. They have come to be called "semantic" primarily because of their uses as ways of representing the meanings of linguistic items.

As a notational device, a semantic network can itself be given a semantics. That is, the arcs, nodes, and rules of a semantic-network representational system can be given interpretations, in terms of the entities they are used to represent. Without such a semantics, a semantic network is an arbitrary notational device liable to misinterpretation (cf. Woods 1975; Brachman 1977, 1983; McDermott 1981).

[Figure: Fig. 2. Porphyry's Tree: a mediaeval inheritance network relating genus, differentiae (CORPOREAL/NON-CORPOREAL, RATIONAL/NON-RATIONAL), species, the principle of individuation, and individuals.]

The task of providing a semantics for semantic networks is more akin to the task of providing a semantics for a language than for a logic, since in the latter case, but not in the former, notions like argument validity must be established and connections must be made with axioms and rules of inference, culminating ideally in soundness and completeness theorems. But underlying the logic's semantics, there must be a semantics for the logic's underlying language, and this would be given in terms of such a notion as meaning. Here, typically, an interpretation function is established between syntactical items from the language L and ontological items from the "world" W that the language is to describe. This, in turn, is usually accomplished by describing the world in another language,
L_W, and showing that L and L_W are notational variants by showing that they are isomorphic.

Recently, linguists and philosophers have argued for the importance of intensional semantics for natural languages (cf. Montague 1974, Parsons 1980, Rapaport 1981). At the same time, computational linguists and other AI researchers have begun to recognize the importance of representing intensional entities (cf. Woods 1975, Brachman 1979, McCarthy 1979, Maida and Shapiro 1982). It seems reasonable that a semantics for such a representational system should itself be an intensional semantics. In this paper, I outline several fully intensional semantics for intensional semantic networks, by discussing the relations between a semantic-network "language" L and several candidates for L_W. For L, I focus on Shapiro's propositional Semantic Network Processing System (SNePS; Shapiro 1979), for which Israel (1983) has offered a possible-worlds semantics. But possible-worlds semantics, while countenancing intensional entities, are not fully intensional, since they treat intensional entities extensionally. The L_W's I discuss all have fully intensional components.

2. SNePS.

A SNePS semantic network (Fig. 3) is primarily a propositional network (see below). It can, however, also be used to represent the inheritability of properties, either by explicit rules or by path-based inference (Shapiro 1978).

[Figure: Fig. 3. A SNePS representation for 'A person named "John" has the property of being rich.']

It consists of labeled nodes and labeled, directed arcs satisfying (inter alia) the following condition (cf. Maida and Shapiro 1982):

(S) There is a 1-1 correspondence between nodes and represented concepts.

A concept is "anything about which information can be stored and/or transmitted" (Shapiro 1979: 179). When a semantic network such as SNePS is used to model "the belief structure of a thinking, reasoning, language using being" (Maida and Shapiro 1982: 296; cf. Shapiro 1971b: 512), the concepts are the objects of mental (i.e., intentional) acts, such as thinking, believing, wishing, etc. Such objects are intensional (cf. Rapaport 1978).

It follows from (S) that the arcs do not represent concepts. Rather, they represent binary, structural relations between concepts. If it is desired to talk about certain relations between concepts, then those relations must be represented by nodes, since they have then become objects of thought, i.e., concepts. In terms of Quine's dictum that "to be is to be the value of a [bound] variable" (Quine 1980: 15; cf. Shapiro 1971a: 79-80), nodes represent such values, arcs do not. That is, given a domain of discourse -- including items, n-ary relations among them, and propositions -- SNePS nodes would be used to represent all members of the domain. The arcs are used to structure the items, relations, and propositions of the domain into (other) propositions. As an analogy, SNePS arcs are to SNePS nodes as the symbols '->' and '+' are to the symbols 'S', 'NP', and 'VP' in the rewrite rule: S -> NP + VP. It is because no propositions are represented by arcs that SNePS is a "propositional" semantic network (cf. Maida and Shapiro 1982: 292).
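As a data-structure gloss on condition (S): relations that are talked about become nodes, and arcs carry only the structural roles that assemble nodes into propositions. The following toy encoding is my illustration, not SNePS notation:

    # Toy illustration of a propositional network in the spirit of (S).
    # Nodes represent every concept, including relations and propositions;
    # arcs -- (role, target) pairs -- merely structure nodes into propositions.
    nodes = {}

    def node(name, **arcs):
        nodes[name] = arcs          # e.g. {'object': 'john', 'property': 'rich'}
        return name

    node("john")                    # an individual concept
    node("rich")                    # a property concept -- itself a node
    m1 = node("m1", object="john", property="rich")   # a propositional node

    # Because 'rich' is a node, we can assert propositions about it, e.g.
    # that it is a desirable property -- something an arc label could not do.
    m2 = node("m2", object="rich", property="desirable")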
When a semantic network such as SNePS is used to model a mind, the nodes represent only intensional items (Maida and Shapiro 1982; cf. Rapaport 1978). Similarly, if such a network were to be used as a notation for a fully intensional natural-language semantics (such as the semantics presented in Rapaport 1984), the nodes would represent only intensional items. Thus, a semantics for such a network ought itself to be fully intensional.

There are two pairs of types of nodes in SNePS: constant and variable nodes, and atomic (or individual) and molecular (or propositional) nodes. (Molecular individual nodes are currently being implemented; see Sects. 7, 8. For a discussion of the semantics of variable nodes, see Shapiro 1985.) Except for a few pre-defined arcs for use by an inference package, all arc labels are chosen by the user; such labels are completely arbitrary (albeit often mnemonic) and depend on the domain being represented. The "meanings" of the labels are provided (by the user) only by means of explicit rule nodes, which allow the retrieval or construction (by referencing) of propositional nodes.

3. ISRAEL'S POSSIBLE-WORLDS SEMANTICS FOR SNePS.

David Israel's semantics for SNePS assumes "the general framework of Kripke-Montague style model theoretic accounts" (Israel 1983: 3), presumably because he takes it as "quite clear that [Maida and Shapiro] ... view their formalism as a Montague type, type theoretic, intensional system" (Israel 1983: 2). He introduces "a domain D of possible entities, a non-empty set I (of possible worlds), and ... a distinguished element w (of I) to represent the real world" (Israel 1983: 3). An individual concept is a function ic : I -> D. Each constant individual SNePS node is modeled by an ic; variable individual nodes are handled by "assignments relative to such a model". However, predicates -- which, the reader should recall, are also represented in SNePS by constant individual nodes -- are modelled as functions "from I into the power set of the set of individual concepts". Propositional nodes are modelled by "functions from I into {T, F}," although Israel feels that "hyper-intensional" logic would be needed in order to handle propositional attitudes.
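For reference, the model structure just quoted can be collected in one display (my restatement of Israel's definitions, in LaTeX notation):

    \[
    \begin{aligned}
    \mathcal{M} &= \langle D, I, w \rangle, \quad w \in I \\
    \text{individual concepts:}\quad & ic : I \to D \\
    \text{predicates:}\quad & pred : I \to \wp(\{\, ic \mid ic : I \to D \,\}) \\
    \text{propositional nodes:}\quad & prop : I \to \{T, F\}
    \end{aligned}
    \]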
Israel has difficulty interpreting MEMBER, CLASS, and ISA arcs in this framework. This is to be expected, for two reasons. First, it is arguably a mistake to interpret them (rather than giving rules for them), since they are arcs, hence arbitrary and non-conceptual. Second, a possible-worlds semantics is not the best approach (nor is it "clear" that this is what Maida and Shapiro had in mind -- indeed, they explicitly reject it; cf. Maida and Shapiro 1982: 297). Israel himself hints at the inappropriateness of this approach:

  If one is focusing on propositional attitude[s] ... it can seem like a waste of time to introduce model-theoretic accounts of intensionality at all. Thus the air of desperation about the foregoing attempt .... (Israel 1983: 5.)

Moreover -- and significantly -- a possible-worlds approach is misguided if one wants to be able to represent impossible objects, as one should want to if one is doing natural-language semantics (Rapaport 1978, 1981; Routley 1979). A fully intensional semantic network demands a fully intensional semantics. The main rival to Montague-style, possible-worlds semantics (as well as to its close kin, situation semantics [Barwise and Perry 1983]) is Meinongian semantics.

4. MEINONG'S THEORY OF OBJECTS.

Alexius Meinong's (1904) theory of the objects of psychological acts is a more appropriate foundation for a semantics of propositional semantic networks, as well as for a natural-language semantics. In brief, Meinong's theory consists of the following theses (cf. Rapaport 1976, 1978):

(M1) Thesis of Intentionality: Every mental act (e.g., thinking, believing, judging, etc.) is "directed" towards an "object". There are two kinds of Meinongian objects: (1) objecta, the individual-like objects of such a mental act as thinking-of, and (2) objectives, the proposition-like objects of such mental acts as believing(-that) or knowing(-that). E.g., the object of my act of thinking of a unicorn is: a unicorn; the object of my act of believing that the Earth is flat is: the Earth is flat.

(M2) Not every object of thought exists (technically, "has being").

(M3) It is not self-contradictory to deny, nor tautologous to affirm, existence of an object of thought.

(M4) Thesis of Aussersein: All objects of thought are ausserseiend ("beyond being and non-being"). For present purposes, Aussersein is most easily explicated as a domain of quantification for non-existentially-loaded quantifiers, required by (M2) and (M3).

(M5) Every object of thought has properties (technically, "Sosein").

(M6) Principle of Independence: (M2) and (M5) are not inconsistent. (For more discussion, cf. Rapaport 1984c.) Corollary: Even objects of thought that do not exist have properties.

(M7) Principle of Freedom of Assumption: (a) Every set of properties (Sosein) corresponds to an object of thought. (b) Every object of thought can be thought of (relative to certain "performance" limitations).

(M8) Some objects of thought are incomplete (i.e., undetermined with respect to some properties).

(M9) The meaning of every sentence and noun phrase is an object of thought.

It should be obvious that there is a close relationship between Meinong's theory and a fully intensional semantic network like SNePS. SNePS itself is much like Aussersein; Shapiro (personal communication) has said that all nodes are implicitly in the network all the time. In particular, a SNePS base (i.e., atomic constant) node represents an objectum, and a SNePS propositional node represents an objective. Thus, when SNePS is used as a model of a mind, propositional nodes represent the objectives of beliefs (cf. Maida and Shapiro 1982, Rapaport and Shapiro 1984, Rapaport 1984b); and when SNePS is used in a natural language processing system (cf. Shapiro 1982, Rapaport and Shapiro 1984), individual nodes represent the meanings of noun phrases and verb phrases, and propositional nodes represent the meanings of sentences.

Meinong's theory was attacked by Bertrand Russell on grounds of inconsistency: (1) According to Meinong, the round square is both round and square (indeed, this is a tautology); yet, according to Russell, if it is round, then it is not square. (2) Similarly, the existing golden mountain must have all three of its defining properties: being a mountain, being golden, and existing; but, as Russell noted, it doesn't exist. (Cf. Rapaport 1976, 1978 for references.)

There have been several formalizations of Meinongian theories in the recent philosophical literature, each of which overcomes these problems. In subsequent sections I briefly describe three of these and show their relationships to SNePS. (Others, not described here, include Routley 1979 -- cf. Rapaport 1984a -- and Zalta 1983.)
5. RAPAPORT'S THEORY.

On my own reconstruction of Meinong's theory (Rapaport 1976, 1978 -- which bears a coincidental resemblance to McCarthy 1979), there are two types of objects: M-objects (i.e., the objects of thought, which are intensional) and actual objects (which are extensional). There are two modes of predication of properties to these: M-objects are constituted by properties, and both M- and actual objects can exemplify properties. For instance, the pen with which I wrote the manuscript of this paper is an actual object that exemplifies the property of being white. Right now, when I think about that pen, the object of my thought is an M-object that is constituted (in part) by that property. The M-object Jan's pen can be represented as: <belonging to Jan, being a pen> (or, for short, as: <J, P>). Being a pen is also a constituent of this M-object: P c <J, P>; and 'Jan's pen is a pen' is true in virtue of this objective. In addition, <J, P> exemplifies (ex) the property of being constituted by two properties. There might be an actual object, say a, corresponding to <J, P>, that exemplifies the property of being a pen (a ex P) as well as (say) the property of being 6 inches long. But being 6 inches long is not a constituent of <J, P>.

The M-object the round square, <R, S>, is constituted by precisely two properties: being round (R) and being square (S): 'The round square is round' is true in virtue of this, and 'The round square is not square' is false in virtue of it. But <R, S> exemplifies neither of those properties, and 'The round square is not square' is true in virtue of that; i.e., it is ambiguous.

An M-object o exists iff there is an actual object a that is "Sein-correlated" with it: o exists iff ∃a[a SC o] iff ∃a∀F[F c o ⊃ a ex F]. Note that incomplete objects, such as <J, P>, can exist. However, the M-object the existing golden mountain, <E, G, M>, has the property of existing (because E c <E, G, M>) but does not exist (because ¬∃a[a SC <E, G, M>], as an empirical fact).

The intensional fragment of this theory can be used to provide a semantics for SNePS in much the same way that it can be used to provide a semantics for natural language (Rapaport 1981). SNePS base nodes can be taken to represent M-objecta and properties; SNePS propositional nodes can be taken to represent M-objectives. Two alternative networks representing the three M-objectives R c <R, S>, S c <R, S>, and <R, S> ex being impossible are shown in Figs. 4 and 5. (The second can be used to avoid "Clark's paradox"; see Rapaport 1978, 1982.)

[Figure: Fig. 4. A SNePS representation of 'The round square is round', 'The round square is square', and 'The round square is impossible' on Rapaport's theory.]

[Figure: Fig. 5. An alternative SNePS representation of 'The round square is round', 'The round square is square', and 'The round square is impossible' on Rapaport's theory.]

Actual (i.e., extensional) objects, however, should not be represented (cf. Maida and Shapiro 1982: 296ff). To the extent to which such objects are essential to this Meinongian theory, the present theory is perhaps an inappropriate one. (A similar remark holds, of course, for McCarthy 1979.)
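The two modes of predication can be made concrete with a small sketch. This is my illustration of the theory's bookkeeping, not Rapaport's notation: M-objects are sets of constituting properties, exemplification is kept as a separate relation, and that separation is what deflects the round-square objection.

    # Sketch of M-objects with two modes of predication: constituency
    # vs. exemplification. Property and object names are illustrative.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class MObject:
        constituents: frozenset           # properties that constitute it

        def c(self, prop):                # 'is-constituted-by' predication
            return prop in self.constituents

    round_square = MObject(frozenset({"round", "square"}))

    # Constituency: trivially true, as Meinong requires.
    assert round_square.c("round") and round_square.c("square")

    # Exemplification is a distinct relation; <R,S> exemplifies neither
    # roundness nor squareness, only properties like being impossible.
    exemplifies = {(round_square, "impossible"),
                   (round_square, "constituted-by-two-properties")}
    assert (round_square, "round") not in exemplifies

    # Existence: an M-object exists iff some actual object is
    # Sein-correlated with it, i.e. exemplifies all its constituents.
    def exists(m, actual_objects):
        return any(m.constituents <= props for props in actual_objects)

    assert not exists(round_square, [frozenset({"round", "red"})])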
6. PARSONS'S THEORY.

Terence Parsons's theory of nonexistent objects (1980; cf. Rapaport 1976, 1978, 1985) recognizes only one type of object -- intensional ones -- and only one mode of predication. But it has two types of properties: nuclear and extranuclear. The former includes all "ordinary" properties such as: being red, being round, etc.; the latter includes such properties as: existing, being impossible, etc. But the distinction is blurry, since for each extranuclear property there is a corresponding nuclear one. For every set of nuclear properties, there is a unique object that has just those properties. Existing objects must be complete (and, of course, consistent), though not all such objects exist. For instance, the Morning Star and the Evening Star don't exist (if they are taken to consist, roughly, of only two properties each). The round square, of course, is (and only is) both round and square and, so, isn't non-square; though it is, for that reason, impossible, hence not real. As for the existing golden mountain, existence is extranuclear, so the set of these three properties doesn't have a corresponding object. There is, however, a "watered-down", nuclear version of existence, and there is an existing golden mountain that has that property; but it doesn't have the extranuclear property of existence and, so, it doesn't exist.

Parsons's theory could provide a semantics for SNePS, though the use of two types of properties places restrictions on the possible uses of SNePS. On the other hand, SNePS could be used to represent Parsons's theory (though a device would be needed for marking the distinction between nuclear and extranuclear properties) and, hence, together with Parsons's natural-language semantics, to provide a tool for computational linguistics. Fig. 6 suggests how this might be done.

[Figure: Fig. 6. A SNePS representation of 'The round square is round, square, and impossible' on Parsons's theory.]
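To contrast this with the previous sketch: here there is one predication relation, and the work is done by sorting the properties. Again a toy illustration of mine, not Parsons's formalism; the property names are assumptions.

    # Sketch of Parsons-style objects: one mode of predication,
    # two sorts of properties (nuclear vs. extranuclear).
    NUCLEAR = {"round", "square", "golden", "mountain",
               "existent-nuclear"}            # watered-down existence
    EXTRANUCLEAR = {"exists", "impossible"}

    def make_object(nuclear_props):
        """Every set of nuclear properties yields a unique object."""
        assert set(nuclear_props) <= NUCLEAR
        return frozenset(nuclear_props)

    # The 'existing golden mountain' can only be built from the
    # watered-down nuclear existence, so it still fails to exist.
    egm = make_object({"golden", "mountain", "existent-nuclear"})
    has = lambda obj, prop: prop in obj       # the single predication
    assert has(egm, "existent-nuclear")       # it has (nuclear) existence
    # The extranuclear property 'exists' is not settled by construction:
    assert not has(egm, "exists")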
I:~r instance, 'the M~)rnlng Star tx tile ~me as the Evening Star" ~s true because ("(tIM.S}. ~.Jt",S}). .-\nd (31 ,t exists, tl ,Had ~niv H 'here I~ .t guise b such that (",lb. Amtther e\ternal nl,~e td' predt~atl~)n ~x ~,msociati,..n (('"). This ts al~ an equivalence relalltm, hut t~ne that holds between gu0se~ that a m0nd has "put together". ~.e.. between gulwes m "behef space". I'(~r unsran~.e, (" "(llamlet. the Prm~,e ~f I)enmark J. (" anti C" ct~rre~p~md alm~sr exactly r(~ tile use ~t tile I'OUIV art sn 'q,NePS. \lalda and Shap~n~ ~I'IS2: );1131~ u.~ the I-{)UIV ca~-frame to represent t,o relerence f ~vhlch us ~hat ('" us), hut, .~s I have suggested In RapaI~rt lt~84h. I:(J('l\" m~re prnpertv repre~ntx believed ct~ relerence-- ~A,'hl~.h Is '~'ltat (''= IS. It sht~uld he clear h,~ gu:~ the~rv can pnw~de a ~mantncs It)r 'qNeP%. Ilg. 7 ";ugge'~ls h~v. thus m~t, ht h~ done. %~nle pn~hlems remain, ho,x ever: in p.lrtlcular, the need t~ pn:,tde ,= SXeP ~, ~t~rrel,te lt}r mter hal predt~,at~t~n and the retlu~renlent ~1 explicating external predica- tion In terms ~1 retatl~n~', like (" . Note. h~, tha! nt~des m3. mS. and m8 in F!y. 7 ;ire 'structured illdl~.ldtl.~ls '" -a ,~rt ~1 molecular h;~se nixie. g. CON(~L USION. It ~s p~,sthle rn provide a tully tntenslonal, nt)n-fx~,'~ahle- w(~rlds ~malltlCS for ~NePS and similar ,~emanttc net~.v~rk f(wmal tsms. "l he tnt~t strat~,htlttr~.vard way ,s h~ use ~,letmmg's thet~rv ~)l ohlects, though thus the~rv has tile dx.,,ad~antage ,~t not being f~,r- mah/¢d. There are several extant formal ~.|emon~lan theorte~ that can t~ u.sed, t|t~;u~h eaLh has L.ertaln dt~tdvantages or pn~hl~mr;. Two hnes ,ff ,e~earch are currently being inv,~;tlgate~d: (1) Take ~.Nel~F, as :s. and prnvide a nov,', formal Memonglan theory I',~r Its semanth.: ~~,u,'tdatl~)n. Thin has not been discussed here. hut the wav to do this sh~luid be clear: from the p~.s.slhtlittes examined ab~lve. My t~v,'n theory (strspped of Its exten~mnal IragmentJ ~)r a m(Cdl~;:il~n (~| (',istaRetia'y~ rllel~rv ~'enl tile me,st pronll~ln~ appn~:u.he~. {2~ Modnlv S~.eP% '~ that ~n~ ,,I the extant lormal \lenn~;n~.)an ttl,t~rtc.s can ~ ,a~ used. S3,eP~ ~s, nn fact, ~.urrentIv |~nn[. m,~dlhed hv tile SNePS Research [intup-lor independent rea..-a,l'.S - 'n v,'avN that make it cheer to ('.=,,talleda's guise theory, hv :he tnt."(xlUCtlon of structured mdt~,uduals--"hase nodes" with descending arcs for indicating their "internal ~tructure". ACKNOWLEDGMENTS. This research was supported in part by ~ilSN'I' Buifalo Research DeveJupment Fund grant ~150-9216-F. I am grateful to Stuart C'. Shapiro, Hc,~tor-Nen Ca.stallreda. and the members of the SNePS Research Group for comments and discussion. 46 ' r - (Evening Star) ~ ~ (Morning Sta~ ~ @_orning Star [plane't.~ Fig. 7. A SNePS representation o£ "l'he Morning Star is the Evening Star' (m6) and 'The Morning Star is a planet' (m9) on Castaneda's theory. REFERENCES Barw,se. Jon. and h)hn I)errv. Situations and Attitudes (( 'amhrtdge. \la~s.: MIT Pres.s, 1983). t~hrow. Daniel (;..and Terry Wmognld.".An (Iverxlew oi KRI..a Knowledge Representat|on I.anguage.'" ('ognitive Science I( 1977)3-46. Brachman. Ronald J.. "~,'hat's ~n a ('oncept: Structural Foundattons for C, emantlc Networks, ~ Int. J. Man-Machine Studies c~ 1977)127-52. "()n tile I~pmtemolog~cal Y, tatu,~ ,f ~mantlc 3.et- works," in Findler 1979: 3-5o. "What IS-A Is and Isn't: An Analysis of "l'axont~msc I.inks m Semantsc Networks," (hm~pute~ I h(( )~t. 1983)30-3h. Castarteda, Ilectnr-Nen. 
8. CONCLUSION.

It is possible to provide a fully intensional, non-possible-worlds semantics for SNePS and similar semantic-network formalisms. The most straightforward way is to use Meinong's theory of objects, though this theory has the disadvantage of not being formalized. There are several extant formal Meinongian theories that can be used, though each has certain disadvantages or problems. Two lines of research are currently being investigated: (1) Take SNePS as is, and provide a new, formal Meinongian theory for its semantic foundation. This has not been discussed here, but the way to do this should be clear from the possibilities examined above. My own theory (stripped of its extensional fragment) or a modification of Castañeda's theory seem the most promising approaches. (2) Modify SNePS so that one of the extant formal Meinongian theories can be used. SNePS is, in fact, currently being modified by the SNePS Research Group -- for independent reasons -- in ways that make it closer to Castañeda's guise theory, by the introduction of structured individuals -- "base nodes" with descending arcs for indicating their "internal structure".

ACKNOWLEDGMENTS.

This research was supported in part by SUNY Buffalo Research Development Fund grant #150-9216-F. I am grateful to Stuart C. Shapiro, Hector-Neri Castañeda, and the members of the SNePS Research Group for comments and discussion.

REFERENCES

Barwise, Jon, and John Perry, Situations and Attitudes (Cambridge, Mass.: MIT Press, 1983).

Bobrow, Daniel G., and Terry Winograd, "An Overview of KRL, a Knowledge Representation Language," Cognitive Science 1 (1977) 3-46.

Brachman, Ronald J., "What's in a Concept: Structural Foundations for Semantic Networks," Int. J. Man-Machine Studies 9 (1977) 127-52.

_____, "On the Epistemological Status of Semantic Networks," in Findler 1979: 3-50.

_____, "What IS-A Is and Isn't: An Analysis of Taxonomic Links in Semantic Networks," Computer 16 (Oct. 1983) 30-36.

Castañeda, Hector-Neri, "Thinking and the Structure of the World" (1972), Philosophia 4 (1974) 3-40; reprinted in Critica 6 (1972) 43-86.

_____, "Individuals and Non-Identity: A New Look," American Phil. Qtly. 12 (1975a) 131-40.

_____, "Identity and Sameness," Philosophia 5 (1975b) 121-50.

_____, Thinking and Doing (Dordrecht: D. Reidel, 1975c).

_____, "Perception, Belief, and the Structure of Physical Objects and Consciousness," Synthese 35 (1977) 285-351.

_____, "Fiction and Reality: Their Basic Connections," Poetica 8 (1979) 31-62.

_____, "Reference, Reality, and Perceptual Fields," Proc. and Addresses American Phil. Assoc. 53 (1980) 763-823.

Findler, N. V. (ed.), Associative Networks (New York: Academic Press, 1979).

Hendrix, Gary G., "Encoding Knowledge in Partitioned Networks," in Findler 1979: 51-92.

Israel, David J., "Interpreting Network Formalisms," in N. Cercone (ed.), Computational Linguistics (Oxford: Pergamon Press, 1983): 1-13.

Kneale, William, and Martha Kneale, The Development of Logic (Oxford: Clarendon Press, 3rd printing, 1966).

Kretzmann, Norman (trans. and ed.), William of Sherwood's "Introduction to Logic" (Minneapolis: Univ. of Minn. Press, 1966).

Maida, Anthony S., and Stuart C. Shapiro, "Intensional Concepts in Propositional Semantic Networks," Cognitive Science 6 (1982) 291-330.

McCarthy, J., "First Order Theories of Individual Concepts and Propositions," in J. E. Hayes, D. Michie, and L. Mikulich (eds.), Machine Intelligence 9 (Chichester, Eng.: Ellis Horwood, 1979): 129-47.

McDermott, Drew, "Artificial Intelligence Meets Natural Stupidity," in J. Haugeland (ed.), Mind Design: Philosophy, Psychology, Artificial Intelligence (Cambridge: MIT Press, 1981): 143-60.

Meinong, Alexius, "Über Gegenstandstheorie" (1904), in R. Haller (ed.), Alexius Meinong Gesamtausgabe, Vol. II (Graz, Austria: Akademische Druck- u. Verlagsanstalt, 1971): 481-535. English translation ("The Theory of Objects") by I. Levi et al., in R. M. Chisholm (ed.), Realism and the Background of Phenomenology (New York: Free Press, 1960): 76-117.

Montague, Richard, Formal Philosophy, ed. R. H. Thomason (New Haven: Yale Univ. Press, 1974).

Parsons, Terence, Nonexistent Objects (New Haven: Yale Univ. Press, 1980).

Quillian, M. Ross, "Semantic Memory," in M. Minsky (ed.), Semantic Information Processing (Cambridge: MIT Press, 1968): 227-66.

Quine, Willard Van Orman, "On What There Is," in From a Logical Point of View (Cambridge: Harvard Univ. Press, 2nd ed., 1980): 1-19.

Rapaport, William J., Intentionality and the Structure of Existence, Ph.D. diss., Indiana Univ., 1976.

_____, "Meinongian Theories and a Russellian Paradox," Nous 12 (1978) 153-80; errata, Nous 13 (1979) 125.

_____, "How to Make the World Fit Our Language: An Essay in Meinongian Semantics," Grazer Phil. Studien 14 (1981) 1-21.

_____, "Meinong, Defective Objects, and (Psycho-)Logical Paradox," Grazer Phil. Studien 18 (1982) 17-39.

_____, Critical Notice of Routley 1979, Phil. and Phenomenological Research 44 (1984a) 539-52.

_____, "Belief Representation and Quasi-Indicators," Tech. Report 215 (SUNY Buffalo Dept. of Computer Science, 1984b).

_____, Review of Lambert's Meinong and the Principle of Independence, Tech. Report 217 (SUNY Buffalo Dept. of Computer Science, 1984c); forthcoming in J. Symbolic Logic.

_____, "To Be and Not to Be," Nous 19 (1985).

_____, and Stuart C. Shapiro, "Quasi-Indexical Reference in Propositional Semantic Networks," Proc. 10th Int. Conf. Computational Linguistics (COLING-84) (Morristown, NJ: Assoc. for Computational Linguistics, 1984): 65-70.

Routley, Richard, Exploring Meinong's Jungle and Beyond (Canberra: Australian Natl. Univ., Research School of Social Sciences, Dept. of Philosophy, 1979).

Shapiro, Stuart C., The MIND System: A Data Structure for Semantic Information Processing, Report R-837-PR (Santa Monica: Rand Corporation, 1971).
'"1"~ I~ and Not to lie," Noes 1~ 19851. ,nd Stuart (:. Shapmx "Ouas~-Indextcal Reference in Pr(~rstslt:onai Semantic \et~.'orks.'" Proc. tOth Int. Conf. (h~mputational I,inguistics 11 ?( )1 .IN(; 84 ) ( \h~rr~stt~w n, N.l: A~.~x:. ', 7omputat:~mal I.mgu~st~cs. I'48.1): 65 71~. Routlev. Rtchard. I'xt~loring Meinong's Jungle and Beyond ((?an- I~erra: Austrahan Natl. Umv.. Research Scherzi ol ~k~tal ~ences, Dept. of Phdt~)phv, 1979). Shapiro, Stuart (:, "['he MIND System: A Data Structure for Semantic Information Processing, Re~x~rt R-837-PR (Santa Moni~: Rand Corporation. 1971). 47 _._, "A Net Structure for Semantic Information Storage, I)eductlon and Retrteval." Proc. IJC'AI 2(1971h)512-23. "Path-Based and N~le-Based Inference zn ~mantlc Networks," m I). Waltz (ed.). Theoretical Issues in Natural language Processing 2( 1978)219-25. .__, "The SNePS Semantic Network Proces.smg System," in Findler 1979: 179-203. , "Generalized Augmented Transition Network Gram- mars For Generation From Semantic Networks," American J. (~ompulational Linguistics 8( 1 q82)12-25. , "Sy/mmetrlc Relations, Intensional Individuals. and Vartable Binding." (forthcoming, 1985). W~xv,.ls. William A.. "What's m a I.mk: The Semantic.,; of Semantic Networks." m I). (i.)~brow and A. M. ('oHins (eds.). Represen- tation and I/nderslanding (New "fork: A~:ademtc Press, 1975): 35- 7~. Zalta. I'dv~ard..4bstt act Objects ( I h~rdret'ht: I ). Reldel. 1987,). 48
Speech Acts and Rationality

Philip R. Cohen
Artificial Intelligence Center, SRI International
and Center for the Study of Language and Information, Stanford University

Hector J. Levesque
Department of Computer Science, University of Toronto*

1 Abstract

This paper derives the basis of a theory of communication from a formal theory of rational interaction. The major result is a demonstration that illocutionary acts need not be primitive, and need not be recognized. As a test case, we derive Searle's conditions on requesting from principles of rationality coupled with a Gricean theory of imperatives. The theory is shown to distinguish insincere or nonserious imperatives from true requests. Extensions to indirect speech acts, and ramifications for natural language systems, are also briefly discussed.

2 Introduction

The unifying theme of much current pragmatics and discourse research is that the coherence of dialogue is to be found in the interaction of the conversants' plans. That is, a speaker is regarded as planning his utterances to achieve his goals, which may involve influencing a hearer by the use of communicative or "speech" acts. On receiving an utterance realizing such an action, the hearer attempts to infer the speaker's goal(s) and to understand how the utterance furthers them. The hearer then adopts new goals (e.g., to respond to a request, to clarify the previous speaker's utterance or goal) and plans his own utterances to achieve those. A conversation ensues.

This view of language as purposeful action has pervaded Computational Linguistics research, and has resulted in numerous prototype systems [1, 2, 3, 5, 9, 25, 27]. However, the formal foundations underlying these systems have been unspecified or underspecified. In this state of affairs, one cannot characterize what a system should do independently from what it does.

This paper begins to rectify this situation by presenting a formalization of rational interaction, upon which is erected the beginnings of a theory of communication and speech acts. Interaction is derived from principles of rational action for individual agents, as well as principles of belief and goal adoption among agents. The basis of a theory of purposeful communication thus emerges as a consequence of principles of action.

*Fellow of the Canadian Institute for Advanced Research.

1. This research was made possible in part by a gift from the Systems Development Foundation, and in part by support from the Defense Advanced Research Projects Agency under Contract N00039-84-K-0078 with the Naval Electronic Systems Command. The views and conclusions contained in this document are those of the authors and should not be interpreted as representative of the official policies, either expressed or implied, of the Defense Advanced Research Projects Agency or the United States Government. Much of this research was done when the second author was employed at the Fairchild Camera and Instrument Corp.

2.1 Speech Act Theory

Speech act theory was originally conceived as part of action theory. Many of Austin's [4] insights about the nature of speech acts, felicity conditions, and modes of failure apply equally well to non-communicative actions.
Searle [26] repeatedly mentions that many of the conditions he attributes to various illocutionary acts (such as requests and questions) apply more generally to non-communicative action. However, researchers have gradually lost sight of these roots. In recent work [3], illocutionary acts are formalized, and a logic is proposed, in which properties of IA's (e.g., "preparatory conditions" and "modes of achievement") are primitively stipulated, rather than derived from more basic principles of action. We believe this approach misses significant generalities. This paper shows how to derive properties of illocutionary acts from principles of rationality, updating the formalism of [10].

Work in Artificial Intelligence provided the first formal grounding of speech act theory in terms of planning and plan recognition, culminating in Perrault and Allen's [23] theory of indirect speech acts. Much of our research is inspired by their analyses. However, one major ingredient of their theory can be shown to be redundant: illocutionary acts. All the inferential power of the recognition of their illocutionary acts was already available in other "operators". Nevertheless, the natural language systems based on this approach [1, 3] always had to recognize which illocutionary act was performed in order to respond to a user's utterance. Since the illocutionary acts were unnecessary for achieving their effects, so too was their recognition.

The stance that illocutionary acts are not primitive, and need not be recognized, is a liberating one. Once taken, it becomes apparent that many of the difficulties in applying speech act theory to discourse, or to computer systems, stem from taking these acts too seriously -- i.e., too primitively.

3 Form of the argument

We show that illocutionary acts need not be primitive by deriving Searle's conditions on requesting from an independently-motivated theory of action. The realm of communicative action is entered following Grice [13] -- by postulating a correlation between the utterance of a sentence with a certain syntactic feature (e.g., its dominant clause is an imperative) and a complex
for brevity, the discussion of the.,e speech acts has been omitted. Indirect speech acts can be handled within the framework. although, again, we cannot present the analyses here. Briefly, axioms similar to those of Perrauh and Allen {22] can be sup- plied enabling one to reason that an agent has a goal that q, ~iven that he also has a goal p. When the p's and q's are them- selves goals of the hearer (i.e.. the speaker is trying to get the hearer to do something), then we can derive a set of lemmas for i,,lirect requests. Many of these indirect request lemmas corre- spond to what have been called %herr-circuited" implicatures. which, it was suggested [211 underlie the processing of utterances of the form "Can you do X?'. "Do you know y?", etc. l,emma formation and lemma application thus provide a familiar model of-herr-circuiting. Furthermore. this approach shows how one ran use general purpose reasoning in concert with convention- alized b~rms (e.g., how one can reason that "Can you reach the salt" is a request to pass the salt), a problem that has plagnwd most theories of speech acts. The plan for the paper is to construct a formalism based on a theory of action that is sufficient for characterizing a request. Most of the work is in the theory of action, as it should be. 4 The Formalism To achieve these goals we need a carefl:lly worked out (though perhaps, incomplete) theory of rational action and interaction. "!'he theory wil~ be expressed in a logic whose mndet theory is ba.,ed (loosely) on a possible-worlds semantics. We shall propose a logic with four primary modal operators -- BELief, BMB, ~,f)AL. and AFTER. W~th these, we shall characterize what agents need to know to perform actions that art, intended to achieve their ~oals. The .zgents do so with Ihe knowledge that other agents operate similarly. Thus, agents have beliefs about .'her'~ gcals, and they have goals to influence others' beliefs and goals. The integration of these operators follows that of Moore {20l, who analyzes how an agent's knowledge affects and is affected by his actions, by meshing a possible-worlds model of knowledge with a situation calculus model of action [18]. By adding GOAL, we can begin to talk about an agent's plans, which can include his plans to influence the beliefs and goals of others. Intuitively, a model for these operators includes courses of events (i.e., sequences of primitive acts) " that characterize what has happened. Courses of events (O.B.e.'s) are paths through a tree of possible future primitive acts, and after any primitive act has occurred, one can recover the course of events that led up to it. C.o.e.'s can also be related to one another via accessiblity relations that partake in the semantics of BEL and GOAL. Fur- ther details of this semantics must await our forthcoming paper [17]. As a general strategy, the formalism will be too strong. First, we have the usual consequential closure problems that plague possible-worlds models for belief. These, however, will be ac- cepted for the time being. Second, the formalism will describe agents as satisfying certain properties that might generally he true, but for which there might be exceptions. Perhaps a process of non-monotonic reasoning could smooth over the exceptions, but we will not attempt to specify such reasoning here. Instead, we assemble a set of basic principles and examine their conse- quences for speech act use. 
Third, we are willing to live with the difficulties of the situation calculus model of action - e.g., the lack of a way to capture tnse parallelism, and the frame prob- lem. Finally. the formalism should be regarded as a de,~eription or specification Bran agent, rather than one that any agent could or should use. Our approach will be to ground a theory of communication in a theory of rational interaction, itself supported by a theory, of rational action, which is finally grounded in mental states. Ac- cordingly, we first need to describe the_behavior of BEL, BMB. GOAL and AFTER. Then, these operators will be combined to describe how agents' goals and plans influence their actions. Then. we characterize how having beliefs about the beliefs and goals of othe~ can affect one's own beliefs and goals. Finally, we characterize a request. To be more spe~iflc, here are the primitives that will be used, with a minimal explanation. 4,1 Primitives Assume p, q, ... are schema variables ranging over wffs, and a, b • • are schematic variables ranging over acts. Then the following are wlfs. 4.1.1 tVffs ~p {p v q} (AFTEI'~. a p} - p is true in all courses of events that obt,-,in from act a's happening';, (if a denotes a halting act). (DONI:'. a) - The event denoted by a has just happened. (AGTa x) - Agent xistheonly agent of act a a ~ b -- Art a I)r~cedes act b in the current course of events. 3 z p ,~here p contains a free occurrence of variable z. x-~.y True. False (BEL x p) - p foUows from X'S beliefs. {~OAL x p) -- p fotlotps from x's goals. {BMB x y p} .- p/~llows from x's beliefs about what is mutually believed by x and y. :P'w chls paper, the only events that will be considered &re primitive acts. 3Th&t is. p is true in ~.11 c.o e.'s resulting from concatenating the current c.o.e, with the c.o.e, denoted by a. 50 4.1.2 Action Formation If a, b, c, d range over sequences of primitive acts, and p is a wff. then the following are complex act descriptions: a:b -- sequential action a [ b -- non-deterministic choice (a or b) action p? -- action of positively testing p. def (IF p a b) -- conditional action = (p?:a) 1 (~pT;b), as in dy- namic logic. (UNTIL p a) -- iterative action d*~ (~p:a)';~p? (again, as in dynamic logic). The recta-symbol "1-' will prefix formulas that are theorems, i.e.. that are derivable. Properties of the formal system that will be assumed to hold will be termed Propositions. Propositions will be both formulas that should always be valid, for our forth- coming ~emantics, and rules of inference that should be sound. No attempt to prove or validate these propositions here, but we do so in It 7]. 4.2 Properties of Acts We adop! ,In' ,Isual axioms characterizing how complex actions behave .mh'r AFTER, a.s treated in a dynamic logic (e.g., [20]) namely, Proposition t Propert*es o/complez aet~ --~ (AFTER (AFTER (AFTER AFTER atttl ties: Proposition Proposition Proposition Propositlon Proposition a:b p) --- (AFTER a (AFTER b p)). a]b p) -= (AFTER a p) ^ (AFTER b p). p't q) -= p ^ q. DONE will have ~he following additional proper. 2 V act (AFTER act (DONE x act)) 4 $ Va [{DONE (AFTER a p)?:a) ~ p] 4 [lb. ~D,q then (DONE ~?:a) :~ (DONE ,')?;a) ,5 p -= {DONE p?} 6 (DONE [(p 3 q) ^ p]?} .~ (DONE q?) Our treatment of acts requires that we deal somehow with the "frame problem" [18]. That is, we must characterize not only what changes as a resuh of doing an action, but also what does not change. 
Our treatment of acts requires that we deal somehow with the "frame problem" [18]. That is, we must characterize not only what changes as a result of doing an action, but also what does not change. To approach this problem, the following notation will be convenient:

Definition 1 (PRESERVES a p) ≝ p ⊃ (AFTER a p)

Of course, all theorems are preserved.

Temporal concepts are introduced with DONE (for past happenings) and ◇ (read "eventually"). To say that p was true at some point in the past, we use ∃a (DONE p?;a). ◇ is to be regarded in the "branching time" sense [11], and will be defined more rigorously in [17]. Essentially, ◇p is true iff for all infinite extensions of any course of events there is a finite prefix satisfying p. ◇p and ◇¬p are jointly satisfiable. Since ◇p starts "now", the following property is also true:

Proposition 7 ⊢ p ⊃ ◇p

Also, we have the following rule of inference:

Proposition 8 If ⊢ α ⊃ β then ◇(α ∨ p) ⊃ ◇(β ∨ p)

4.3 The Attitudes

Neither BEL, BMB, nor GOAL characterize what an agent actively believes, mutually believes (with someone else), or has as a goal, but rather what is implicit in his beliefs, mutual beliefs, and goals. [Footnote 5: For an exploration of the issues involved in explicit vs. implicit belief, see [16].] That is, these operators characterize what the world would be like if the agent's beliefs and mutual beliefs were true, and if his goals were made true. Importantly, we do not include an operator for wanting, since desires need not be consistent. We assume that once an agent has sorted out his possibly inconsistent desires in deciding what he wishes to achieve, the worlds he will be striving for are consistent. Conversely, recognition of an agent's plans need not consider that agent's possibly inconsistent desires. Furthermore, there is also no explicit operator for intending. If an agent intends to bring about p, the agent is usually regarded as also being able to bring about p. By using GOAL, we will be able to reason about the end state the agent is aiming at separately from our reasoning about his ability to achieve that state.

For simplicity, we assume the usual Hintikka axiom schemata for BEL [15], and we introduce KNOW by definition:

Definition 2 (KNOW x p) ≝ p ∧ (BEL x p)

4.3.1 Mutual Belief

Human communication depends crucially on what is mutually believed [1, 6, 7, 9, 22, 23, 24]. We do not use the standard definitions, but employ (BMB y x p), which stands for y's belief that it is mutually believed between y and x that p. (BMB y x p) is true iff (BEL y [p ∧ (BMB x y p)]). [Footnote 6: Notice that (BMB y x p) does not imply (BMB x y p).] BMB has the following properties:

Proposition 9 (BMB y x p∧q) ≡ (BMB y x p) ∧ (BMB y x q)

Proposition 10 (BMB y x p⊃q) ⊃ ((BMB y x p) ⊃ (BMB y x q))

Proposition 11 If ⊢ α ⊃ β then ⊢ (BMB y x α) ⊃ (BMB y x β)

Also, we characterize mutual knowledge as:

Definition 3 (MK x y p) ≝ p ∧ (BMB x y p) ∧ (BMB y x p) [Footnote 7: This definition is not entirely correct, but is adequate for present purposes.]

4.3.2 Goals

For GOAL, we have the following properties:

Proposition 12 (GOAL x (GOAL x p)) ⊃ (GOAL x p)

If an agent thinks he has a goal, then he does:
Proposition 13 (BEL x (GOAL x p)) ≡ (GOAL x p)

Proposition 14 (GOAL x p) ∧ (GOAL x p⊃q) ⊃ (GOAL x q) [Footnote 8: Notice that if p⊃q is true (or even believed) but (GOAL x p⊃q) is not true, we should not reach this conclusion, since some act could make it false.]

The following two derived rules are also useful:

Proposition 15 If ⊢ α ⊃ β then ⊢ (GOAL x α) ⊃ (GOAL x β)

Proposition 16 If ⊢ α ∧ β ⊃ γ then ⊢ (BMB y x (GOAL x α)) ∧ (BMB y x (GOAL x β)) ⊃ (BMB y x (GOAL x γ))

More properties of GOAL follow.

4.4 Attitudes and Rational Action

Next, we must characterize how beliefs, goals, and actions are related. The interaction of BEL and AFTER will be patterned after Moore's analysis [20]. In particular, we have:

Proposition 17 ∀x, act (AGT act x) ⊃ (AFTER act (KNOW x (DONE act)))

Agents know what they have done. Moreover, they think certain effects of their own actions are achieved:

Proposition 18 (BEL x (RESULT x a p)) ⊃ (RESULT x a (BEL x p)), where

Definition 4 (RESULT x a p) ≝ (AFTER a p) ∧ (AGT a x)

The major addition we have made is GOAL, which interacts tightly with the other operators.

We will say a rational agent only adopts goals that are achievable, and accepts as "desirable" those states of the world that are inevitable. To characterize inevitabilities, we have

Definition 5 (ALWAYS p) ≝ ∀a (AFTER a p)

This says that no matter what happens, p is true. Clearly, we want

Proposition 19 If ⊢ α then ⊢ (BEL x (ALWAYS α))

That is, theorems are believed to be always true. Another property we want is that no sequence of primitive acts is forever ruled out from happening.

Proposition 20 ⊢ ∀a (ACT a) ⊃ ¬(ALWAYS ¬(DONE a)), where (ACT a) ≝ ¬(AFTER a ¬(DONE a))

One important variant of ALWAYS is (ALWAYS x p) (relative to an agent), which indicates that no matter what that agent does, p is true. The definition of this version is:

Definition 6 (ALWAYS x p) ≝ ∀a (RESULT x a p)

A useful instance of ALWAYS is (ALWAYS p⊃q), in which no matter what happens, p still implies q. We can now distinguish between p ⊃ q's being logically valid, its being true in all courses of events, and its merely being true after some event happens.

4.4.1 Goals and Inevitabilities

What an agent believes to be inevitable is a goal (he accepts what he cannot change).

Proposition 21 (BEL x (ALWAYS p)) ⊃ (GOAL x p)

and conversely (almost), agents do not adopt goals that they believe to be impossible to achieve --

Proposition 22 No futility -- (GOAL x p) ⊃ ¬(BEL x (ALWAYS ¬p))

This gives the following useful lemma:

Lemma 1 Inevitable Consequences -- (GOAL x p) ∧ (BEL x (ALWAYS p⊃q)) ⊃ (GOAL x q)

Proof: By Proposition 21, if an agent believes p⊃q is always true, he has it as a goal. Hence by Proposition 14, q follows from his goals.

This lemma states that if one's goal is a c.o.e. in which p holds, and if one thinks that no matter what happens, p⊃q, then one's goal is a c.o.e. in which q holds. Two aspects of this property are crucially important to its plausibility. First, one must keep in mind the "follows from" interpretation of our propositional attitudes. Second, the key aspect of the connection between p and q is that no one can achieve p without achieving q. If someone could do so, then q need not be true in a c.o.e. that satisfies the agent's goals.
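Since ALWAYS and ◇ quantify over what can happen next, a small executable model helps fix intuitions before we go on. The sketch below evaluates AFTER, ALWAYS (Definition 5), and ◇ over a finite tree of primitive acts; the finiteness assumption and all names are mine, a stand-in for the paper's infinite branching-time structures.

```python
# A toy semantics for AFTER, ALWAYS, and <> ("eventually") over a finite
# tree of primitive acts. Courses of events are paths from the root; the
# finite-tree assumption (mine, for decidability) replaces infinite time.
from typing import Callable, Dict, List

Tree = Dict[str, "Tree"]          # a node's children, keyed by act name
State = List[str]                 # the c.o.e. so far: acts done from the root
Prop = Callable[[State], bool]    # a wff, modeled as a predicate on c.o.e.'s

def subtree_at(tree: Tree, coe: State) -> Tree:
    for act in coe:
        tree = tree[act]
    return tree

def after(act: str, p: Prop) -> Prop:
    # (AFTER a p): p holds in the c.o.e. obtained by doing act next.
    return lambda coe: p(coe + [act])

def always(p: Prop, tree: Tree) -> Prop:
    # (ALWAYS p): no matter what happens, p holds (Definition 5),
    # checked over all extensions present in the finite tree.
    def holds(coe: State, sub: Tree) -> bool:
        return p(coe) and all(holds(coe + [a], t) for a, t in sub.items())
    return lambda coe: holds(coe, subtree_at(tree, coe))

def eventually(p: Prop, tree: Tree) -> Prop:
    # <>p: every maximal extension has a finite prefix satisfying p.
    def holds(coe: State, sub: Tree) -> bool:
        if p(coe):
            return True
        return bool(sub) and all(holds(coe + [a], t) for a, t in sub.items())
    return lambda coe: holds(coe, subtree_at(tree, coe))

# Example: once "pay" has been done, (DONE pay) holds forever after.
tree: Tree = {"pay": {"ship": {}}, "refuse": {}}
done_pay: Prop = lambda coe: "pay" in coe
print(eventually(done_pay, tree)([]))   # False: the "refuse" branch never pays
print(always(done_pay, tree)(["pay"]))  # True: what is done stays done
```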
Now, we have the following as a lemma that will be used in the speech act derivations:

Lemma 2 Shared Recognition -- (BMB y x (GOAL x p)) ∧ (BMB y x (BEL x (ALWAYS p⊃q))) ⊃ (BMB y x (GOAL x q))

The proof is a straightforward application of Lemma 1 and Propositions 9 and 10.

4.4.2 Persistent Goals

In this formalism, we are attempting to capture a number of properties of what might be called "intention" without postulating a primitive concept for "intend". Instead, we will combine acts, beliefs, goals, and a notion of commitment built out of more primitive notions. To capture one grade of commitment that an agent might have towards his goals, we define a persistent goal, P-GOAL, to be one that the agent will not give up until he thinks it has been satisfied, or until he thinks he cannot achieve it. Now, in order to state constraints on c.o.e.'s we define:

Definition 7 (PREREQ x p q) ≝ ∀c (RESULT x c q) ⊃ ∃a (a ≤ c) ∧ (RESULT x a p)

This definition states that p is a prerequisite for x's achieving q if all ways for x to bring about q result in a course of events in which p has been true. Now, we are ready for persistent goals:

Definition 8 (P-GOAL x p) ≝ (GOAL x p) ∧ (PREREQ x [(BEL x p) ∨ (BEL x (ALWAYS x ¬p))] ¬(GOAL x p))

Persistent goals are ones the agent will replan to achieve if his earlier attempts to achieve them fail to do so. Our definition does not say that an agent must give up his goal when he thinks it is satisfied, since goals of maintenance are allowed. All this says is that somewhere along the way to giving up the persistent goal, the agent had to think it was true (or believe it was impossible for him to achieve).

Though an agent may be persistent, he may be foolishly so because he has no competence to achieve his goals. We characterize competence below.

4.4.3 Competence

People are sometimes experts in certain fields, as well as in their own bodily movements. For example, a competent electrician will form correct plans to achieve world states in which "electrical" states of affairs obtain. Most adults are competent in achieving world states in which their teeth are brushed, etc. We will say an agent is COMPETENT with respect to p if, whenever he thinks p will be true after some action happens, he is correct:

Definition 9 (COMPETENT x p) ≝ ∀a (BEL x (AFTER a p)) ⊃ (AFTER a p)

One property of competence we will want is:

Proposition 23 ∀x, a (AGT a x) ⊃ (ALWAYS (COMPETENT x (DONE x a))), where

Definition 10 (DONE x a) ≝ (DONE a) ∧ (AGT a x)

That is, any person is always competent to do the acts of which he is the agent. [Footnote 9: Because of Proposition 2, all Proposition 23 says is that if a competent agent believes his own primitive act halts, it will.] Of course, he is not always competent to achieve any particular effect.

Finally, given all these properties, we are ready to describe rational agents.

4.5 Rational Agents

Below are properties of ideally rational agents who adopt persistent goals.

First, agents are careful: they do not knowingly and deliberately make their persistent goals impossible for them to achieve.

Proposition 24 (DONE x act) ⊃ (DONE x p?;act), where p ≝ (P-GOAL x q) ⊃ ¬(BEL x (AFTER act (ALWAYS x ¬q))) ∨ ¬(GOAL x (DONE x act)) [Footnote 10: Notice that it is crucial that p be true in the same world in which the agent does act, hence the use of "p?;act".]

In other words, no deliberately shooting oneself in the foot.
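Definition 8's commitment condition can be pictured as a constraint on traces of an agent's attitude states: a GOAL may disappear only after a state in which the agent believed the goal true, or believed it forever unachievable. Here is a hypothetical encoding (the string representation of attitudes is mine, purely illustrative of the PREREQ constraint):

```python
# A sketch of Definition 8's commitment condition as a check over a trace
# of attitude states. Each state is a set of strings naming attitudes.
from typing import List, Set

def respects_p_goal(trace: List[Set[str]], p: str) -> bool:
    """True iff whenever GOAL(p) is dropped along the trace, some earlier
    state contained BEL(p) or BEL(ALWAYS ~p) -- i.e., the agent only gave
    the goal up after thinking it satisfied or unachievable."""
    licensed = False
    had_goal = False
    for state in trace:
        licensed = licensed or f"BEL({p})" in state \
                            or f"BEL(ALWAYS ~{p})" in state
        if had_goal and f"GOAL({p})" not in state and not licensed:
            return False                      # dropped the goal too soon
        had_goal = f"GOAL({p})" in state
    return True

# The agent drops GOAL(door_open) only after believing it achieved: OK.
ok = [{"GOAL(door_open)"}, {"GOAL(door_open)", "BEL(door_open)"}, set()]
# Here the goal silently disappears with no such belief: not persistent.
bad = [{"GOAL(door_open)"}, set()]
print(respects_p_goal(ok, "door_open"))    # True
print(respects_p_goal(bad, "door_open"))   # False
```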
Now, agents are cautious in adopting persistent goals, since they must eventually come to some decision about their feasibility. We require an agent to either come up with a "plan" to achieve them -- a belief of some act (or act sequence) that it achieves the persistent goal -- or to believe he cannot bring the goal about. That is, agents do not adopt persistent goals they could never give up. The next Proposition will characterize this property of P-GOAL.

But, even with a correct plan and a persistent goal, there is still the possibility that the competent agent never executes the plan in the right circumstances -- some other agent has changed the circumstances, thereby making the plan incorrect. If the agent is competent, then if he formulates another plan, it will be correct for the new circumstances. But again, the world could change out from under him. Now, just as with operating systems, we want to say that the world is "fair" -- the agent will eventually get a chance to execute his plans. This property is also characterized in the following Proposition:

Proposition 25 Fair Execution -- The agent will eventually form a plan and execute it, believing it achieves his persistent goal in circumstances he believes to be appropriate for its success.

∀x (P-GOAL x q) ⊃ ◇[∃act' (DONE x p?;act') ∨ (BEL x (ALWAYS x ¬q))], where p ≝ (BEL x (RESULT x act' q))

We now give a crucial theorem:

Theorem 1 Consequences of a persistent goal -- If someone has a persistent goal of bringing about p, and bringing about p is within his area of competence, then eventually either p becomes true or he will believe there is nothing that can be done to achieve p.

∀y (P-GOAL y p) ∧ (ALWAYS (COMPETENT y p)) ⊃ ◇(p ∨ (BEL y (ALWAYS y ¬p)))

Proof sketch: Since the agent has a persistent goal, he eventually will either find and execute a plan, or will believe there is nothing he can do to achieve the goal. Since he is competent with respect to p, the plans he forms will be correct. Since his plan act' is correct, and since any other plans he forms for bringing about p are also correct, and since the world is "fair", eventually either the agent executes his correct plan, making p true, or the agent comes to believe he cannot achieve p. A more rigorous proof can be found in the Appendix.

This theorem is a major cornerstone of the formalism, telling us when we can conclude ◇p, given a plan and a goal, and it is used throughout the speech act analyses. If an agent who is not COMPETENT with respect to p adopts p as a persistent goal, we cannot conclude that eventually either p will be true (or the agent will think he cannot bring it about), since the agent could forever create incorrect plans. If the goal is not persistent, we also cannot conclude ◇p, since the agent could give it up without achieving it.

The use of ◇ opens the formalism to McDermott's "Little Nell" paradox [19]. [Footnote 11: Little Nell is tied to the railroad tracks, and will be mashed by the next train. Dudley Doright is planning to save her. McDermott claims that, according to various AI theories of planning, he never will, even though he always knows just what to do.]
On the other hand, it would appear that Proposition 25 is sufficient to prevent the agent from giving up his goal too soon, since it states that the agent with a persistent goal must act on it, and, moreover, the definition of P-GOAL does not require the agent to give up his goal immediately. For persistent goals to achieve <>q. within someone's scope of competence, one might think the agent need "only" maintain <>q as a goal, and then the other properties of rationality force the agent to perform a primitive act. Unfortunately, the properties given so far do not yet rule out Little Nell's being mashed, and for two reasons. First, NIL denotes a primitive act -- the empty sequence, llence, doing it would satisfy Proposition 25, but the agent never does anything substantive. Second, doing anything that does not affect q also satisfies Proposition 25, since after doing the unrelated act, <>q is still true. We need to say that the agent eventually acts on q! To do so, we have the following property: Proposition 26 (P-GOAL y Oq) 3 O[(P-GOAL y q) v (rtgL y (ALWAYS y ~q))], That is. eventually the agent will have the persistent goal that q, and by Proposif ion 25. will act on it. If he eventually comes to believe he cannot bring about q, he eventually comes to believe he cannot bring about eventually q as well, allowing him to give up his persistent goal that eventually q. 4.6 Rational Interaction This ends our discussion of single agents. We now need to char- acterize rational interaction sufficiently to handle a simple re- qt,?st. First, we ,.liscuss cooperative agents, and then the effects of uttering sentences. 4.6.1 Properties of Cooperative Agents We describe agents as sincere, helpful, and more knowledgeable than others about the t~lth of some ~tate of affairs. Essentially, O.,,~e concepts capture (quite ~iml)li,qic) constraints on influegc- ing ~omeone clse's beliefs and goals, and on adopting the beliefs and goal~ of someone else ~ one'~ own. More refined versions are certainly desirable. Ultimately. we expect such properties of cooperative agents, a.s embedded in a theory of rational inter- action, to provide a formal description of the kinds of conver- sational behavior ~rice [1-t[ describes with his "conversational m;Lxims". First, we will say an agcnt i~ SINCERE with respect to p if whenever his goal is to get someone else to belietpe p, his goal is in fact to get that person to knom p. dec Definition tl (SINCERE x p) = (GOAL x (laEL y p)) D (GOAL x (KNOW y p)) An agent is HELPFUL to another if he adopts as his own persistent goal another agent's goal that he eventually do some- thing (provided that potential goal does not conflict with his own I. Definition 12 (HELPFUL x y) a,¢= 'Ca (BEL x (GOAL y (}(DONE y a))) ^ ~(GOAL x ~(DONE x a)) D (P-GOAL x (DONE x a)) Agent x thinks agent y is more EXPERT about the true of p than x if he always adopts x's beliefs about p as his own. def Definition 13 (EXPERT y x p) : (BEL x (BEL y p)) :3 (BEL x p) 4.0.2 Uttering Sentences with Certain aFeatures" Finally, we need to describe the effects of uttering sentences with certain "features" [141, such an mood. In particular, we need to characterize the results of uttering imperative, interrogative, and declarative sentences t: Our descriptions of these effects will be similar to Grices's [131 and to Perrauh and Allen's {22] %urface speech acts'. Many times, these sentence forms are not used literally to perform the corresponding speech acts (requests, questions, and assertions). 
The following is used to characterize uttering an imperative:

Proposition 27 Imperatives:
∀x, y (MK x y (ATTEND y x)) ⊃
  (RESULT x [IMPER x y "do y act"]
    (BMB y x (GOAL x (BEL y (GOAL x (P-GOAL y (DONE y act)))))))

The act [IMPER speaker hearer "p"] stands for "make p true". Proposition 27 states that if it is mutually known that y is attending to x, [Footnote 13: If it is not mutually known that y is attending, for example, if the speaker is not speaking to an audience, then we do not say what the result of uttering an imperative is.] then the result of uttering an imperative to y to make it the case that y has done action act is that y thinks it is mutually believed that the speaker's goal is that y should think his goal is for y to form the persistent goal of doing act.

We also need to assert that IMPER preserves sincerity about the speaker's goals, and helpfulness. These restrictions could be loosened, but maintaining them is simpler.

Proposition 28 (PRESERVES [IMPER x y "do y act"] (BMB y x (SINCERE x (GOAL x p))))

Proposition 29 (PRESERVES [IMPER x y "do y act"] (HELPFUL y x))

All Gricean "feature"-based theories of communication need to account for cases in which a speaker uses an utterance with a feature, but does not have the attitudes (e.g., beliefs and goals) usually attributed to someone uttering sentences with that feature. Thus, the attribution of the attitudes needs to be context-dependent. Specifically, Proposition 28 needs to be weak enough to prevent nonserious utterances such as "go jump in the lake" from being automatically interpreted as requests even though the utterance is an imperative. On the other hand, the formula must be strong enough that requests are derivable.

5 Deriving a Simple Request

In making a request, the speaker is trying to get the hearer to do an act. We will show how the speaker's uttering an imperative to do the act leads to its eventually being done. What we need to prove is this:

Theorem 2 Result of an Imperative --
(DONE [(MK x y (ATTEND y x)) ∧
       (BMB y x (SINCERE x (GOAL x (P-GOAL y (DONE y act))))) ∧
       (HELPFUL y x)]?;
      [IMPER x y "do y act"])
⊃ ◇(DONE y act)

We will give the major steps of the proof in Figure 1, and point to their justifications. The full-fledged proofs are left to the energetic reader. All formulas preceded by a * are supposed to be true just prior to performing the IMPER, are preserved by it, and thus are implicitly conjoined to formulas 2-9. By their placement in the proof, we indicate where they are necessary for making the deductions.

Essentially, the proof proceeds as follows: If it is mutually known that y is attending to x, and y thinks it is mutually believed that the *-conditions hold, then x's uttering an imperative to y to do some action results in formula (2). Since it is mutually believed x is sincere about his goals, then (3) it is mutually believed his goal truly is that y form a persistent goal to do the act. Since everyone is always competent to do acts of which they are the agent, (4) it is mutually believed that the act will eventually be done, or y will think it is forever impossible to do. But since no halting act is forever impossible to do, it is (5) mutually believed that x's goal is that y eventually do it. Hence, (6) y thinks x's goal is that y eventually do the act. Now, since y is helpfully disposed towards x, and has no objections to doing the act, (7) y takes it on as a persistent goal.
Since he is always competent about doing his own acts, (8) eventually it will be done or he will think it impossible to do. Again, since it is not forever impossible, (9) he will eventually do it.

We have shown how the performing of an imperative to do an act leads to the act's eventually being done. We wish to create a number of lemmas from this proof (and others like it) to characterize illocutionary acts.

6 Plans and Summaries

6.1 Plans

A plan for agent x to achieve some goal q is an action term a and two sequences of wffs p₀, p₁, ..., pₖ and q₀, q₁, ..., qₖ, where qₖ is q, satisfying

1. ⊢ (BEL x ((p₀ ∧ p₁ ∧ ... ∧ pₖ) ⊃ (RESULT x a (q₀ ∧ p₁ ∧ ... ∧ pₖ))))
2. ⊢ (BEL x (ALWAYS ((pᵢ ∧ qᵢ₋₁) ⊃ qᵢ))), i = 1, ..., k

In other words, given a state where x believes the pᵢ, he will believe that if he does a then q₀ will hold; moreover, given that the act preserves the pᵢ, he believes his making qᵢ₋₁ true in the presence of pᵢ will also make qᵢ true. Consequently, a plan is a special kind of proof that

⊢ (BEL x ((p₀ ∧ ... ∧ pₖ) ⊃ (RESULT x a q)))

and therefore, since (BEL x p) ⊃ (BEL x (BEL x p)) and (BEL x (p ⊃ q)) ⊃ ((BEL x p) ⊃ (BEL x q)) are axioms of belief, a plan is a proof that

⊢ (BEL x (p₀ ∧ ... ∧ pₖ)) ⊃ (BEL x (RESULT x a q))

Among the corollaries to a plan are

⊢ (BEL x ((p₀ ∧ ... ∧ pᵢ) ⊃ (RESULT x a qᵢ))), i = 1, ..., k

and

⊢ (BEL x ((p₀ ∧ ... ∧ pᵢ) ⊃ (ALWAYS (qᵢ₋₁ ⊃ qᵢ)))), i = 1, ..., k

There are two main points to be made about these corollaries. First of all, since they are theorems, the implications can be taken to be believed by the agent x in every state. In this sense, these wffs express general methods believed to achieve certain effects, provided the assumptions are satisfied. The second point is that these corollaries are in precisely the form that is required in a plan and therefore can be used as justification for a step in a future plan, in much the same way a lemma becomes a single step in the proof of a theorem.

6.2 Summaries

We therefore propose a notation for describing many steps of a plan as a single summarizing operator. A summary consists of a name, a list of free variables, a distinguished free variable called the agent of the summary (who will always be listed first), an Effect, which is a wff, an optional Body, which is either an action or a wff, and finally, an optional Gate, which is a wff. The understanding here is that summaries are associated with an agent, and for an agent x to have summary u, there are three cases depending on the Body of u:

1. If the Body of u is a wff, then ⊢ (BEL x (ALWAYS ((Gate ∧ Body) ⊃ (Gate ∧ Effect)))) [Footnote 15: Of course, many actions change the truth of their preconditions. Handling such actions and preconditions is straightforward.]
2. If the Body of u is an action term, then ⊢ (BEL x (Gate ⊃ (RESULT agent Body (Gate ∧ Effect))))

1. (DONE [(MK x y (ATTEND y x)) ∧ (*-conditions)]?; [IMPER x y "do y act"])   [Given]
2. (BMB y x (GOAL x (BEL y (GOAL x (P-GOAL y (DONE y act)))))) ∧ *(BMB y x (SINCERE x (GOAL x (P-GOAL y (DONE y act)))))   [P27, P3, P4, 1]
3. (BMB y x (GOAL x (P-GOAL y (DONE y act)))) ∧ *(BMB y x (ALWAYS (COMPETENT y (DONE y act))))   [P11, P12, 2]
4. (BMB y x (GOAL x ◇[(DONE y act) ∨ (BEL y (ALWAYS ¬(DONE y act)))])) ∧ *(BMB y x ¬(ALWAYS ¬(DONE y act)))   [T1, P16, 3]
5. (BMB y x (GOAL x ◇(DONE y act)))   [P16, P20, P8, 4]
6. (BEL y (GOAL x ◇(DONE y act))) ∧ *(HELPFUL y x)   [Def. BMB]
7. (P-GOAL y (DONE y act)) ∧ *(ALWAYS (COMPETENT y (DONE y act)))   [Def. of HELPFUL, MP]
8. ◇[(DONE y act) ∨ (BEL y (ALWAYS ¬(DONE y act)))] ∧ *¬(ALWAYS ¬(DONE y act))   [T1]
9. ◇(DONE y act)   [P20, P8]
Q.E.D.

Figure 1: Proof of Theorem 2 -- An imperative to do an act results in its eventually being done.
One thing worth noting about summaries is that normally the wffs used above, ⊢ (BEL x (Gate ⊃ ...)), will follow from the more general wff ⊢ Gate ⊃ .... However, this need not be the case, and different agents could have different summaries (even with the same name). Saying that an agent has a summary is no more than a convenient way of saying that the agent always believes an implication of a certain kind.

7 Summarization of a Request

The following is a summary named REQUEST that captures steps 2 through 5 of the proof of Theorem 2.

[REQUEST x y act]:
Gate: (1) (BMB y x (SINCERE x (GOAL x (P-GOAL y (DONE y act))))) ∧
      (2) (BMB y x (ALWAYS (COMPETENT y (DONE y act)))) ∧
      (3) (BMB y x ¬(ALWAYS ¬(DONE y act)))
Body: (BMB y x (GOAL x (BEL y (GOAL x (P-GOAL y (DONE y act))))))
Effect: (BMB y x (GOAL x ◇(DONE y act)))

This summary allows us to conclude that any action preserving the Gate and making the Body true makes the Effect true. Conditions (2) and (3) are theorems and hence are always preserved. Condition (1) was preserved by assumption.

Searle's conditions for requesting are captured by the above. Specifically, his "propositional content" condition, which states that one requests a future act, is present as the Effect because of Theorem 2. Searle's first "preparatory" condition -- that the hearer be able to do the requested act, and that the speaker think so -- is satisfied by condition (2). Searle's second preparatory condition -- that it not be obvious that the hearer was going to do the act anyway -- is captured by our conditions on persistence, which state when an agent can give up a persistent goal that is not one of maintenance: when it has been satisfied.

Grice's "recognition of intent" condition [12, 13] is satisfied since the endpoint in the chain (step 9) is a goal. Hence, the speaker's goal is to get the hearer to do the act by means, in part, of the (mutual) recognition that the speaker's goal is to get the hearer to do it. Thus, according to Grice, the speaker has meant-nn that the hearer should do the act. Searle's revised Gricean condition, that the hearer should "understand" the literal meaning of the utterance, and what illocutionary act the utterance "counts as", are also satisfied, provided the summary is mutually known. [Footnote 16: The further elaboration of this point that it deserves is outside the scope of this paper.]

7.1 Nonserious Requests

Two questions now arise. First, is this not overly complicated? The answer, perhaps surprisingly, is "No". By applying this REQUEST theorem, we can prove that the utterance of an imperative in the circumstances specified by the Gate results in the Effect, which is as simple a propositional attitude as anyone would propose for the effect of uttering an imperative -- namely, that it is mutually believed that the speaker's goal is that the hearer eventually do the act. The Body need never be considered unless one of the gating conditions fails.

Then, if the Body is rarely needed, when is the "extra" embedded (GOAL speaker (BEL hearer ...)) attitude of use? The answer is that these embeddings are essential to preventing nonserious or insincere imperatives from being interpreted unconditionally as requests.
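This gate-then-effect behavior is exactly a lemma-application step, and can be sketched operationally: if every Gate condition already holds in the hearer's state and the utterance has established the Body, the Effect is concluded outright, skipping steps 2-5 of Figure 1; otherwise the system falls back on reasoning from the Body. The record type and string encoding below are mine, not the paper's.

```python
# A sketch of applying the REQUEST summary as a lemma over a set of
# formulas (encoded here as strings) taken to hold in the hearer's state.
from dataclasses import dataclass
from typing import List, Set

@dataclass
class Summary:
    name: str
    gate: List[str]     # wffs that must already hold (and be preserved)
    body: str           # wff the utterance makes true
    effect: str         # wff we may then conclude

REQUEST = Summary(
    name="REQUEST",
    gate=["BMB(y,x,SINCERE(x,GOAL(x,P-GOAL(y,DONE(y,act)))))",
          "BMB(y,x,ALWAYS(COMPETENT(y,DONE(y,act))))",
          "BMB(y,x,~ALWAYS(~DONE(y,act)))"],
    body="BMB(y,x,GOAL(x,BEL(y,GOAL(x,P-GOAL(y,DONE(y,act))))))",
    effect="BMB(y,x,GOAL(x,<>DONE(y,act)))",
)

def apply_summary(state: Set[str], s: Summary) -> Set[str]:
    """One lemma-application step: Gate + Body licenses Effect."""
    if all(g in state for g in s.gate) and s.body in state:
        return state | {s.effect}
    return state        # a gate failed: fall back on reasoning from scratch

state = set(REQUEST.gate) | {REQUEST.body}   # state after the imperative
state = apply_summary(state, REQUEST)
print(REQUEST.effect in state)               # True: request recognized
```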
In demonstrating how these embeddings block nonserious readings, we will show how Searle's "Sincerity" condition is captured by our SINCERE predicate.

The formula (SINCERE speaker p) is false when the speaker does something to get the hearer to believe he, the speaker, has the goal of the hearer's believing p, when he in fact does not have the goal of the hearer's knowing that p. Let us see how this would be applied for "Go jump in the lake", uttered idiomatically. Notice that it could be uttered and meant as a request, and we should be able to capture the distinction between serious and nonserious uses. In the case of uttering this imperative, the content of SINCERE, p, is

p = (GOAL speaker (P-GOAL hearer (DONE hearer [JUMP-INTO Lake1])))

Assume that it is mutually known/believed that the lake is frigidly cold (any other conditions leading to ¬(GOAL x p) would do as well, e.g., that the hearer is wearing his best suit, or that there is no lake around). So, by a reasonable axiom of goal formation, no one has goals to achieve states of affairs that are objectionable (assume what is "objectionable" involves a weighing of alternatives). So, it is mutually known/believed that ¬(GOAL speaker (DONE hearer [JUMP-INTO Lake1])), and so the speaker does not believe he has such a goal. [Footnote 17: The speaker's expressed goal is that the hearer form a persistent goal to jump in the lake. But, by the Inevitable Consequences lemma, given that a c.o.e. satisfying the speaker's goal also has the hearer's eventually jumping in (since the hearer knows what to do), the speaker's goal is also a c.o.e. in which the hearer eventually jumps in. In the same way, the speaker's goal would also be that the hearer eventually gets wet.] The consequent of the implication defining SINCERE is false, and because the result of the imperative is a mutual belief that the speaker's goal is that the hearer think he has the goal of the hearer's jumping into the lake, the antecedent of the implication is true. Hence, the speaker is insincere or not serious, and a request interpretation is blocked. [Footnote 18: However, we do not say what else might be derivable. The speaker's true goals may have more to do with the manner of his action (e.g., tone of voice) than with the content. All we have done is demonstrate formally how a hearer could determine the utterance is not to be taken at face value.]

In the case of there not being a lake around, the speaker's goal cannot be that the hearer form the persistent goal of jumping in some non-existent lake, since by the No Futility property, the hearer will not adopt a goal if it is unachievable, and hence the speaker will not form his goal to achieve the unachievable state of affairs (that the hearer adopt a goal he cannot achieve). Hence, since all this is mutually believed, using the same argument, the speaker must be insincere.

8 Nonspecific Requests

The ability conditions for requests are particularly simple, since as long as the hearer knows what action the speaker is referring to, he can always do it. He cannot, however, always bring about some goal world. An important variation of requesting is one in which the speaker does not specify the act to be performed; he merely expresses his goal that some p be made true. This will be captured by the action [IMPER x y "p"] for "make p true". Here, in planning this act, the speaker need only believe the hearer thinks it is mutually believed that it is always the case that the hearer will eventually find a plan to bring about p.
Although we cannot present the proof that performing an [IMPER x y "p"] will make ◇p true, the following is the illocutionary summary of that proof:

[NONSPECIFIC-REQUEST x y p]:
Gate: (BMB y x (SINCERE x (GOAL x (BEL y (GOAL x (P-GOAL y p)))))) ∧
      (BMB y x (ALWAYS (COMPETENT y p))) ∧
      (BMB y x (ALWAYS ◇∃act' (DONE y q?;act'))), where q ≝ (BEL y (RESULT y act' p))
Body: (BMB y x (GOAL x (BEL y (GOAL x (P-GOAL y p)))))
Effect: (BMB y x (GOAL x ◇p))

Since the speaker only asks the hearer to make p true, the ability conditions are that the hearer think it is mutually believed that it is always true that eventually there will be some act such that the hearer believes of it that it achieves p (or he will believe it is impossible for him to achieve). The speaker need not know what act the hearer might choose.

9 On Summarization

Just as mathematicians have the leeway to decide which proofs are useful enough to be named as lemmas or theorems, so too do the language user, linguist, computer system, and speech act theoretician have great leeway in deciding which summaries to name and form. Grounds for making such decisions range from the existence of illocutionary verbs in a particular language, to efficiency. However, summaries are flexible -- they allow for different languages and different agents to carve up the same plans differently. [Footnote 19: Remember, summaries are actually beliefs of agents, and those beliefs need not be shared.] Furthermore, a summary formed for efficiency may not correspond to a verb in the language.

Philosophical considerations may enter into how much of a plan to summarize for an illocutionary verb. For example, most illocutionary acts are considered successful when the speaker has communicated his intentions, not when the intended effect has taken hold. This argues for labelling as Effects of summaries intended to capture illocutionary acts only formulas that are of the form (BMB hearer speaker (GOAL speaker p)), rather than those of the form (BMB hearer speaker p) or (BEL hearer p), where p is not a GOAL-dominated formula. Finally, summaries may be formed as conversations progress.

The same ability to capture varying amounts of a chain of inference will allow us to deal with multi-utterance or multi-agent acts, such as betting, complying, answering, etc., in which there either needs to be more than one act (a successful bet requires an offer and an acceptance), or one act is defined to require the presence of another (complying makes sense only in the presence of a previous directive). For example, where REQUEST captured the chain of inference from step 2 to step 5, one called COMPLY could start at 5 and stop at step 9.

Thus, the notion of characterizing illocutionary acts as lemma-like summaries, i.e., as chains of inference subject to certain conditions, buys us the ability to encapsulate distant inferences in "one shot".

9.1 Ramifications for Computational Models of Language Use

The use of these summaries provides a way to prove that various short-cuts that a system might take in deriving a speaker's goals are correct. Furthermore, the ability to index summaries by their Bodies or from the utterance types that could lead to their application (e.g., for utterances of the form "Can you do <X>?") allows for fast retrieval of a lemma that is likely to result in goal recognition.
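This indexing idea can be sketched as a retrieval table keyed on utterance form, returning candidate summaries ordered by how much of a proof chain each one spans, so that more comprehensive summaries are tried first. The patterns, summary names, and step counts below are invented purely for illustration.

```python
# A sketch of summary retrieval: index summaries by the surface form that
# tends to establish their Bodies, try the most comprehensive first, and
# fall back. Everything in the table is hypothetical.
import re
from typing import List, Tuple

# (pattern over the utterance, summary name, # of proof steps it spans)
INDEX: List[Tuple[str, str, int]] = [
    (r"^can you (.+)\?$", "INDIRECT-REQUEST", 7),
    (r"^can you (.+)\?$", "YES-NO-QUESTION", 3),
    (r"^(.+)!$",          "REQUEST", 4),
]

def candidate_summaries(utterance: str) -> List[str]:
    """Summaries whose trigger matches, most comprehensive first; the
    caller checks each one's Gate and keeps the first that applies,
    resorting to first-principles plan reasoning if all gates fail."""
    hits = [(steps, name) for pat, name, steps in INDEX
            if re.match(pat, utterance.lower())]
    return [name for steps, name in sorted(hits, reverse=True)]

print(candidate_summaries("Can you reach the salt?"))
# ['INDIRECT-REQUEST', 'YES-NO-QUESTION']
```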
By an appropriate organization of summaries [5], a system can attempt to apply the most comprehensive summaries first, and if they are inapplicable, can fall back on less comprehensive ones, eventually relying on first principles of reasoning about actions. Thus, the apparent difficulty of reasoning about speaker intent can be tamed for the "short-circuited" cases, but more general-purpose reasoning can be deployed when necessary. However, the complexities of reasoning about others' beliefs and goals remain.

10 Extensions: Indirection

Indirection will be modeled in this framework as the derivation of propositions dealing with the speaker's goals that are not stated as such by the initial propositional attitude. For example, if we can conclude from (BMB y x (GOAL x (GOAL y p))) that (BMB y x (GOAL x (GOAL y ◇q))), where p does not entail q, then, "loosely", we will say an indirect request has been made by x.

Given the properties of ◇, (GOAL x p) ⊃ (GOAL x ◇p) is a theorem. (GOAL x p) and (GOAL x ¬p) are mutually unsatisfiable, but (GOAL x ◇p) and (GOAL x ◇¬p) are jointly satisfiable. For example, (GOAL BILL ◇(HAVE BILL HAMMER1)) and (GOAL BILL ◇(HAVE JOHN HAMMER1)) could both be part of a description of Bill's plan for John to get a hammer and give it to him. Such a plan could be triggered by Bill's merely saying "Get the hammer" in the right circumstances, such as when Bill is on a ladder plainly holding a nail. [Footnote 20: Notice that most theories of speech acts would treat the above utterance as a direct request. We do not.] A subsequent paper will demonstrate the conditions under which such reasoning is sound.

11 Concluding Remarks

This paper has demonstrated that all illocutionary acts need not be primitive. At least some can be derived from more basic principles of rational action, and an account of the propositional attitudes affected by the uttering of sentences with declarative, interrogative, and imperative moods. This account satisfies a number of criteria for a good theory of illocutionary acts.

* Most elements of the theory are independently motivated. The theory of rational action is motivated independently from any notions of communication. Similarly, the properties of cooperative agents are also independent of communication.

* The characterization of the result of uttering sentences with certain syntactic moods is justified by the results we derive for illocutionary acts, as well as the results we cannot derive (e.g., we cannot derive a request under conditions of insincerity).

* Summaries need not correspond to illocutionary verbs in a language. Different languages could capture different parts of the same chain of reasoning, and an agent might have formed a summary for purposes of efficiency, but that summary need not correspond to any other agent's summary.

* The rules of combination of illocutionary acts (characterizing, for example, how multiple assertions could constitute the performance of a request) are now reduced to rules for combining propositional contents and attitudes. Thus, multi-utterance illocutionary acts can be handled by accumulating the speaker's goals expressed in multiple utterances, to allow an illocutionary theorem to be applied.

* Multi-act utterances are also a natural outgrowth of this approach. There is no reason why one cannot apply multiple illocutionary summaries to the result of uttering a sentence. Those summaries, however, need not correspond to illocutionary verbs.
* The theory is naturally extensible to indirection (to be argued for in another paper), to other illocutionary acts such as questions, commands, informs, and assertions, and to the act of referring [8].

Finally, although illocutionary act recognition may be strictly unnecessary, given the complexity of our proofs, it is likely to be useful. Essentially, such recognition would amount to the application of illocutionary summary theorems to discover the speaker's goal(s).

12 Acknowledgements

We would like to thank Tom Blenko, Herb Clark, Michael Georgeff, David Israel, Bob Moore, Geoff Nunberg, Fernando Pereira, Ray Perrault, Stan Rosenschein, Ivan Sag, and Moshe Vardi for valuable discussions.

13 References

1. Allen, J. F. A plan-based approach to speech act recognition. Technical Report 131/79, Department of Computer Science, University of Toronto, January, 1979.
2. Allen, J. F., Frisch, A. M., & Litman, D. J. ARGOT: The Rochester dialogue system. Proceedings of the National Conference on Artificial Intelligence, Pittsburgh, Pennsylvania, 1982, 66-70.
3. Appelt, D. Planning Natural Language Utterances to Satisfy Multiple Goals. Ph.D. thesis, Stanford University, Stanford, California, December 1981.
4. Austin, J. L. How to do things with words. Oxford University Press, London, 1962.
5. Brachman, R., Bobrow, R., Cohen, P., Klovstad, J., Webber, B. L., & Woods, W. A. Research in natural language understanding. Technical Report 4274, Bolt Beranek and Newman Inc., August, 1979.
6. Bruce, B. C., & Newman, D. Interacting plans. Cognitive Science 2, 3, 1978, pp. 195-233.
7. Clark, H. H., & Marshall, C. Definite reference and mutual knowledge. In Elements of Discourse Understanding, Academic Press, Joshi, A. K., Sag, I. A., & Webber, B., Eds., New York, 1981.
8. Cohen, P. R. The Pragmatics of Referring and the Modality of Communication. Computational Linguistics 10, 2, 1984, pp. 97-146.
9. Cohen, P. R. On Knowing What to Say: Planning Speech Acts. Ph.D. thesis, University of Toronto, Toronto, January 1978. Technical Report No. 118, Dept. of Computer Science.
10. Cohen, P. R., & Levesque, H. J. Speech Acts and the Recognition of Shared Plans. Proc. of the Third Biennial Conference, Canadian Society for Computational Studies of Intelligence, Victoria, B.C., May, 1980, 263-271.
11. Emerson, E. A., and Halpern, J. Y. "Sometimes" and "Not Never" Revisited: On Branching versus Linear Time. ACM Symposium on Principles of Programming Languages, 1983.
12. Grice, H. P. Meaning. Philosophical Review 66, 1957, pp. 377-388.
13. Grice, H. P. Utterer's Meaning and Intentions. Philosophical Review 78, 2, 1969, pp. 147-177.
14. Grice, H. P. Logic and conversation. In Cole, P., and Morgan, J. L., Eds., Syntax and Semantics: Speech Acts, Academic Press, New York, 1975.
15. Halpern, J. Y., and Moses, Y. A Guide to the Modal Logics of Knowledge and Belief. Proc. of the Ninth International Joint Conference on Artificial Intelligence, IJCAI, Los Angeles, Calif., August, 1985.
16. Levesque, H. J. A logic of implicit and explicit belief. Proceedings of the National Conference of the American Association for Artificial Intelligence, Austin, Texas, 1984.
17. Levesque, H. J., & Cohen, P. R. A Simplified Logic of Interaction. In preparation.
18. McCarthy, J., & Hayes, P. J. Some Philosophical Problems from the Standpoint of Artificial Intelligence. In Machine Intelligence 4, American Elsevier, B. Meltzer & D. Michie, Eds., New York, 1969.
19. McDermott, D. A temporal logic for reasoning about processes and plans. Cognitive Science 6, 2, 1982, pp. 101-155.
20. Moore, R. C. Reasoning about Knowledge and Action. Technical Note 191, Artificial Intelligence Center, SRI International, October, 1980.
21. Morgan, J. L. Two types of convention in indirect speech acts. In Syntax and Semantics, Volume 9: Pragmatics, Academic Press, P. Cole, Ed., New York, 1978, 261-280.
22. Perrault, C. R., & Allen, J. F. A Plan-Based Analysis of Indirect Speech Acts. American Journal of Computational Linguistics 6, 3, 1980, pp. 167-182.
23. Perrault, C. R., & Cohen, P. R. It's for your own good: A note on inaccurate reference. In Elements of Discourse Understanding, Cambridge University Press, Joshi, A., Sag, I., & Webber, B., Eds., Cambridge, Mass., 1981.
24. Schiffer, S. Meaning. Oxford University Press, London, 1972.
25. Schmidt, C. F., Sridharan, N. S., & Goodson, J. L. The plan recognition problem: An intersection of artificial intelligence and psychology. Artificial Intelligence 10, 1979, pp. 45-83.
26. Searle, J. R. Speech acts: An essay in the philosophy of language. Cambridge University Press, Cambridge, 1969.
27. Sidner, C. L., Bates, M., Bobrow, R. J., Brachman, R. J., Cohen, P. R., Israel, D. J., Webber, B. L., & Woods, W. A. Research in knowledge representation for natural language understanding. Annual Report 4785, Bolt Beranek and Newman Inc., November, 1981.
28. Vanderveken, D. A Model-Theoretic Semantics for Illocutionary Force. Logique et Analyse 26, 103-104, 1983, pp. 359-394.

13 Appendix

Proof of Theorem 1: First, we need a lemma:

Lemma 3 ∀a (DONE x [(BEL x (AFTER a p)) ∧ (COMPETENT x p)]?;a) ⊃ p

Proof:
1. ∀a (DONE x [(BEL x (AFTER a p)) ∧ (COMPETENT x p)]?;a)   [Ass.]
2. (BEL x (AFTER a p)) ∧ (COMPETENT x p) ⊃ (AFTER a p)   [Def. of COMPETENT, MP]
3. ∀a (DONE x (AFTER a p)?;a)   [2, P4]
4. p   [3, P3]
5. ∀a (DONE x [(BEL x (AFTER a p)) ∧ (COMPETENT x p)]?;a) ⊃ p   [Impl. Intr.]
Q.E.D.

Theorem 1: ∀y (P-GOAL y p) ∧ (ALWAYS (COMPETENT y p)) ⊃ ◇(p ∨ (BEL y (ALWAYS y ¬p)))

Proof:
1. (P-GOAL y p) ∧ (ALWAYS (COMPETENT y p))   [Ass.]
2. ◇[∃a (DONE y (BEL y (AFTER a p))?;a) ∨ (BEL y (ALWAYS y ¬p))]   [1, P25, MP]
3. ◇[p ∨ (BEL y (ALWAYS y ¬p))]   [L3, P8, 2]
4. (P-GOAL y p) ∧ (ALWAYS (COMPETENT y p)) ⊃ ◇(p ∨ (BEL y (ALWAYS y ¬p)))   [Impl. Intr., 3]
Q.E.D.
Ontological Promiscuity

Jerry R. Hobbs
Artificial Intelligence Center, SRI International
and Center for the Study of Language and Information, Stanford University

Abstract

To facilitate work in discourse interpretation, the logical form of English sentences should be both close to English and syntactically simple. In this paper I propose a logical notation which is first-order and nonintensional, and for which semantic translation can be naively compositional. The key move is to expand what kinds of entities one allows in one's ontology, rather than complicating the logical notation, the logical form of sentences, or the semantic translation process. Three classical problems -- opaque adverbials, the distinction between de re and de dicto belief reports, and the problem of identity in intensional contexts -- are examined for the difficulties they pose for this logical notation, and it is shown that the difficulties can be overcome. The paper closes with a statement about the view of semantics that is presupposed by this approach.

1 Motivation

The real problem in natural language processing is the interpretation of discourse. Therefore, the other aspects of the total process should be in the service of discourse interpretation. This includes the semantic translation of sentences into a logical form, and indeed the logical notation itself. Discourse interpretation processes, as I see them, are inferential processes that manipulate or perform deductions on logical expressions encoding the information in the text and on other logical expressions encoding the speaker's and hearer's background knowledge. These considerations lead to two principal criteria for a logical notation.

Criterion I: The notation should be as close to English as possible. This makes it easier to specify the rules for translation between English and the formal language, and also makes it easier to encode in logical notation facts we normally think of in English. The ideal choice by this criterion is English itself, but it fails monumentally on the second criterion.

Criterion II: The notation should be syntactically simple. Since discourse processes are to be defined primarily in terms of manipulations performed on expressions in the logical notation, the simpler that notation, the easier it will be to define the discourse operations.

The development of such a logical notation is usually taken to be a very hard problem. I believe this is because researchers have imposed upon themselves several additional constraints -- to adhere to stringent ontological scruples, to explain a number of mysterious syntactic facts as a by-product of the notation, and to encode efficient deduction techniques in the notation. Most representational difficulties go away if one rejects these constraints, and there are good reasons for rejecting each of the constraints.

Ontological scruples: Researchers in philosophy and linguistics have typically restricted themselves to very few (although a strange assortment of) kinds of entities -- physical objects, numbers, sets, times, possible worlds, propositions, events, and situations -- and all of these but the first have been controversial. Quine has been the greatest exponent of ontological chastity. His argument is that in any scientific theory, "we adopt, at least insofar as we are reasonable, the simplest conceptual scheme into which the disordered fragments of our experience can be fitted and arranged." (Quine, 1953, p. 16.) But he goes on to say that "simplicity ...
is not a clear and unambiguous idea; and it is quite capable of presenting a double or multiple standard." (Ibid., p. 17.) Minimizing kinds of entities is not the only way to achieve simplicity in a theory. The aim in this enterprise is to achieve simplicity by minimizing the complexity of the rules in the system. It turns out this can be achieved by multiplying kinds of entities, by allowing as an entity everything that can be referred to by a noun phrase.

Syntactic explanation: The argument here is easy. It would be pleasant if an explanation of, say, the syntactic behavior of count nouns and mass nouns fell out of our underlying ontological structure at no extra cost, but if the extra cost is great complication in statements of discourse operations, it would be quite unpleasant. In constructing a theory of discourse interpretation, it doesn't make sense for us to tie our hands by requiring syntactic explanations as well. The problem of discourse is at least an order of magnitude harder than the problem of syntax, and syntax shouldn't be in the driver's seat.

Efficient deduction: There is a long tradition in artificial intelligence of building control information into the notation, and indeed much work in knowledge representation is driven by this consideration. Semantic networks and other notational systems built around hierarchies (Quillian, 1968; Simmons, 1973; Hendrix, 1975) implicitly assign a low cost to certain types of syllogistic reasoning. The KL-ONE representation language (Schmolze and Brachman, 1982) has a variety of notational devices, each with an associated efficient deduction procedure. Hayes (1979) has argued that frame representations (Minsky, 1975; Bobrow and Winograd, 1977) should be viewed as sets of predicate calculus axioms together with a control component for drawing certain kinds of inferences quickly. In quite a different vein, Moore (1980) uses a possible-worlds notation to model knowledge and action in part to avoid inefficiencies in theorem-proving.

By contrast, I would argue against building efficiencies into the notation. From a psychological point of view, this allows us to abstract away from the details of implementation on a particular computational device, increasing the generality of the theory. From a technological point of view, it reflects a belief that we must first determine empirically the most common classes of inferences required for discourse processing and only then seek algorithms for optimizing them.

In this paper I propose a flat logical notation with an ontologically promiscuous semantics. One's first naive guess as to how to represent a simple sentence like

A boy builds a boat.

is as follows:

(∃x, y) build(x, y) ∧ boy(x) ∧ boat(y)

This simple approach seems to break down when we encounter the more difficult phenomena of natural language, like tense, intensional contexts, and adverbials, as in the sentence

A boy wanted to build a boat quickly.

These phenomena have led students of language to introduce significant complications in their logical notations for representing sentences. My approach will be to maintain the syntactic simplicity of the logical notation and expand the theory of the world implicit in the semantics to accommodate this simplicity.
The representation of the above sentence, as is justified below, is

(∃e1, e2, e3, x, y) Past(e1) ∧ want'(e1, x, e2) ∧ quick'(e2, e3) ∧ build'(e3, x, y) ∧ boy(x) ∧ boat(y)

That is, e1 occurred in the past, where e1 is x's wanting e2, which is the quickness of e3, which is x's building of y, where x is a boy and y is a boat.

In brief, the logical form of natural language sentences will be a conjunction of atomic predications in which all variables are existentially quantified with the widest possible scope. Predicates will be identical or nearly identical to natural language morphemes. There will be no functions, functionals, nested quantifiers, disjunctions, negations, or modal or intensional operators.

2 The Logical Notation

Davidson (1967) proposed a treatment of action sentences in which events are treated as individuals. This facilitated the representation of sentences with time and place adverbials. Thus we can view the sentences

John ran on Monday.
John ran in San Francisco.

as asserting the existence of a running event by John and asserting a relation between the event and Monday or San Francisco. We can similarly view the sentence

John ran slowly.

as expressing an attribute of a running event. Treating events as individuals is also useful because they can be arguments of statements about causes:

Because he wanted to get there first, John ran.
Because John ran, he arrived sooner than anyone else.

They can be the objects of propositional attitudes:

Bill was surprised that John ran.

Finally, this approach accommodates the facts that events can be nominalized and can be referred to pronominally:

John's running tired him out.
John ran, and Bill saw it.

But virtually every predication that can be made in natural language can be specified as to time and place, be modified adverbially, function as a cause or effect of something else, be the object of a propositional attitude, be nominalized, and be referred to by a pronoun. It is therefore convenient to extend Davidson's approach to all predications. That is, corresponding to any predication that can be made in natural language, we will say there is an event, or state, or condition, or situation, or "eventuality", or whatever, in the world that it refers to. This approach might be called "ontological promiscuity". One abandons all ontological scruples.

Thus we would like to have in our logical notation the possibility of an extra argument in each predication referring to the "condition" that exists when that predication is true. However, especially for expository convenience, we would like to retain the option of not specifying that extra argument when it is not needed and would only get in our way. Hence, I propose a logical notation that provides two sets of predicates that are systematically related, by introducing what might be called a "nominalization" operator '. Corresponding to every n-ary predicate p there will be an n+1-ary predicate p' whose first argument can be thought of as the condition that holds when p is true of the subsequent arguments. Thus, if run(J) means that John runs, run'(E, J) means that E is a running event by John, or John's running. If slippery(F) means that floor F is slippery, then slippery'(E, F) means that E is the condition of F's being slippery, or F's slipperiness. The effect of this notational maneuver is to provide handles by which various predications can be grasped by higher predications. A similar approach has been used in many AI systems.
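As a concrete illustration of how flat this notation is, the logical form of "A boy wanted to build a boat quickly" given above is nothing more than a list of atomic predications over implicitly existentially quantified variables. A minimal sketch, in a Python encoding of my own choosing:

```python
# A sketch (encoding mine) of the flat logical form for
# "A boy wanted to build a boat quickly": a list of atomic predications
# whose variables are implicitly existentially quantified, widest scope.
LOGICAL_FORM = [
    ("Past", "e1"),               # e1 occurred in the past
    ("want'", "e1", "x", "e2"),   # e1 is x's wanting e2
    ("quick'", "e2", "e3"),       # e2 is the quickness of e3
    ("build'", "e3", "x", "y"),   # e3 is x's building of y
    ("boy", "x"),
    ("boat", "y"),
]

# Discourse operations can treat this uniformly: every handle (e1, e2, e3)
# is available to higher predications, tense, and adverbials alike.
for predication in LOGICAL_FORM:
    print(predication)
```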
In discourse one not only makes predications about such ephemera as events, states and conditions. One also refers to entities that do not actually exist. Our notation must thus have a way of referring to such entities. We therefore take our model to be a Platonic universe which contains everything that can be spoken of -- objects, events, states, conditions -- whether they exist in the real world or not. It then may or may not be a property of such entities that they exist in the real world. In the sentence

(1) John worships Zeus,

the worshipping event and John, but not Zeus, exist in the real world, but all three exist in the (overpopulated) Platonic universe. Similarly, in

John wants to fly.

John's flying exists in the Platonic universe but not in the real world. [Footnote 1, continued below.]

The logical notation then is just first-order predicate calculus, where the universe of discourse is a rich set of individuals, which are real, possible and even impossible objects, events, conditions, eventualities, and so on. [Footnote 2, below.]

Existence and truth in the actual universe are treated as predications about individuals in the Platonic universe. For this purpose, we use a predicate Exist. [Footnote 3, below.] The formula Exist(JOHN) says that the individual in the Platonic universe denoted by JOHN exists in the actual universe. The formula

(2) Exist(E) ∧ run'(E, JOHN)

says that the condition E of John's running exists in the actual universe, or more simply that "John runs" is true, or still more simply, that John runs. A shorter way to write it is run(JOHN).

Although for a simple sentence like "John runs", a logical form like (2) seems a bit overblown, when we come to real sentences in English discourse with their variety of tenses, modalities and adverbial modifiers, the more elaborated logical form is necessary. Adopting the notation of (2) has the effect of splitting a sentence into its propositional content -- run'(E, JOHN) -- and its assertional claim -- Exist(E). This frequently turns out to be useful, as the latter is often in doubt until substantial work has been done by discourse interpretation processes. An entire sentence may be embedded within an indirect proof or other extended counterfactual.

We are now in a position to state formally the systematic relation between the unprimed and primed predicates as an axiom schema. For every n-ary predicate p,

(∀x1, ..., xn) p(x1, ..., xn) ⊃ (∃e) Exist(e) ∧ p'(e, x1, ..., xn)

That is, if p is true of x1, ..., xn, then there is a condition e of p's being true of x1, ..., xn, and e exists. Conversely,

(∀e, x1, ..., xn) Exist(e) ∧ p'(e, x1, ..., xn) ⊃ p(x1, ..., xn)

That is, if e is the condition of p's being true of x1, ..., xn, and e exists, then p is true of x1, ..., xn. We can compress these axiom schemas into one formula:

(3) (∀x1, ..., xn) p(x1, ..., xn) ≡ (∃e) Exist(e) ∧ p'(e, x1, ..., xn)

A sentence in English asserts the existence of one or more eventualities in the real world, and this may or may not imply the existence of other individuals. The logical form of sentence (1) is

Exist(E) ∧ worship'(E, JOHN, ZEUS)

This implies Exist(JOHN) but not Exist(ZEUS). Similarly, the logical form of "John wants to fly" is

[Footnote 1: One need not adhere to Platonism to accept the Platonic universe. It can be viewed as a socially constituted, or conventional, construction, which is nevertheless highly constrained by the way the (not directly accessible) material world is. The degree of constraint is variable.
Exist(E2) ∧ want'(E2, JOHN, E1) ∧ fly'(E1, JOHN)

This implies Exist(JOHN) but not Exist(E1). When the existence of the condition corresponding to some predication implies the existence of one of the arguments of the predication, we will say that the predicate is transparent in that argument, and opaque otherwise.[4]

[4] More properly, we should say "existentially transparent" and "existentially opaque", since this notion does not coincide exactly with referential transparency.

Thus, worship and want are transparent in their first arguments and opaque in their second arguments. In general, if a predicate p is transparent in its nth argument x, this can be encoded by the axiom

(∀ e, ..., x, ...) p'(e, ..., x, ...) ∧ Exist(e) ⊃ Exist(x) [5]

[5] Quantification in this notation is always over entities in the Platonic universe. Existence in the real world is expressed by predicates, in particular the predicate Exist.

That is, if e is p's being true of x and e exists, then x exists. Equivalently,

(∀ ..., x, ...) p(..., x, ...) ⊃ Exist(x)

In the absence of such axioms, predicates are assumed to be opaque.

The following sentence illustrates the extent to which we must have a way of representing existent and nonexistent states and events in ordinary discourse:

(4) The government has repeatedly refused to deny that Prime Minister Margaret Thatcher vetoed the Channel Tunnel at her summit meeting with President Mitterand on 18 May, as New Scientist revealed last week.[6]

[6] This sentence is taken from the New Scientist, June 3, 1982 (p. 632). I am indebted to Paul Martin for calling it to my attention.

In addition to the ordinary individuals Margaret Thatcher and President Mitterand and the corporate entity New Scientist, there are the intervals of time 18 May and "last week", the as yet nonexistent entity the Channel Tunnel, an individual revealing event and the complex event of the summit meeting, which actually occurred, a set of real refusals distributed across time in a particular way, a denial event which did not occur, and a vetoing event which may or may not have occurred.

Let us take Past(E6) to mean that E6 existed in the past and Perfect(E1) to mean what the perfect tense means, roughly, that E1 existed in the past and may not yet be completed. The representation of just the verbs, nominalizations, adverbials and tenses of sentence (4) is as follows:

Perfect(E1) ∧ repeated(E1) ∧ refuse'(E1, GOVT, E2) ∧ deny'(E2, GOVT, E3) ∧ veto'(E3, MT, CT) ∧ at'(E4, E3, E5) ∧ meet'(E5, MT, PM) ∧ on'(E5, 18MAY) ∧ Past(E6) ∧ reveal'(E6, NS, E4) ∧ last-week(E6)

Of the various entities referred to, the sentence, via unprimed predicates, asserts the existence of a typical refusal E1 in a set of refusals and the revelation E6. The existence of the refusal implies the existence of the government; it does not imply the existence of the denial - quite the contrary. It may suggest the existence of the veto, but certainly does not imply it.
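The propagation of existence through transparent argument positions can be mechanized. Here is a sketch (mine, not the paper's) applied to the logical form of sentence (4); the table of transparent argument positions below is an assumption made for the illustration:

```python
# Forward-chain the transparency axioms: if Exist(e) and p'(e, ..., x, ...)
# hold with p transparent in x's position, conclude Exist(x).
TRANSPARENT = {
    ("refuse'", 1),                  # refusing implies the refuser exists
    ("deny'", 1),
    ("veto'", 1),                    # vetoing implies the vetoer exists
    ("at'", 1), ("at'", 2),          # an at-relation implies both relata
    ("meet'", 1), ("meet'", 2),
    ("reveal'", 1), ("reveal'", 2),  # revealing implies the fact revealed
}

SENTENCE_4 = [
    ("refuse'", "E1", "GOVT", "E2"),
    ("deny'",   "E2", "GOVT", "E3"),
    ("veto'",   "E3", "MT",   "CT"),
    ("at'",     "E4", "E3",   "E5"),
    ("meet'",   "E5", "MT",   "PM"),
    ("reveal'", "E6", "NS",   "E4"),
]

def closure(literals, exist):
    """Everything whose existence follows from `exist` by transparency."""
    known, changed = set(exist), True
    while changed:
        changed = False
        for pred, e, *args in literals:
            if e not in known:
                continue
            for i, arg in enumerate(args, start=1):
                if (pred, i) in TRANSPARENT and arg not in known:
                    known.add(arg)
                    changed = True
    return known

# The sentence itself asserts Exist(E1) and Exist(E6) (via Perfect and Past):
print(sorted(closure(SENTENCE_4, {"E1", "E6"})))
# ['E1', 'E3', 'E4', 'E5', 'E6', 'GOVT', 'MT', 'NS', 'PM']
# Notably absent: E2 (the denial) and CT (the Channel Tunnel).
```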
The revelation E6, however, implies the existence of both the New Scientist NS and the at relation E4, which in turn implies the existence of the veto and the meeting. These then imply the existence of Margaret Thatcher MT and President Mitterand PM, but not the Channel Tunnel CT. Of course, we know about the existence of some of these entities, such as Margaret Thatcher and President Mitterand, for reasons other than the transparency of predicates.

Sentence (4) shows that virtually anything can be embedded in a higher predication. This is the reason, in the logical notation, for flattening everything into predications about individuals.

There are four serious problems that must be dealt with if this approach is to work - quantifiers, opaque adverbials, the distinction between de re and de dicto readings of belief reports, and the problem of identity in intensional contexts. I have described a solution to the quantifier problem elsewhere (Hobbs, 1983). Briefly, universally quantified variables are reified as typical elements of sets, existential quantifications inside the scope of universally quantified variables are handled by means of dependency functions, and the quantifier structure of sentences is encoded in indices on predicates. In this paper I will address only the other three problems in detail.

3 Opaque Adverbials

It seems reasonably natural to treat transparent adverbials as properties of events. For opaque adverbials, like "almost", it seems less natural, and one is inclined to follow Reichenbach (1947) in treating them as functionals mapping predicates into predicates. Thus,

John is almost a man.

would be represented

almost(man)(J)

That is, almost maps the predicate man into the predicate "almost a man", which is then applied to John. This representation is undesirable for our purposes since it is not first-order. It would be preferable to treat opaque operators as we do transparent ones, as properties of events or conditions. The sentence would be represented

almost(E) ∧ man'(E, J)

But does this get us into difficulty? First note that this representation does not imply that John is a man, for we have not asserted E's existence in the real world, and almost is opaque and does not imply its argument's existence. But is there enough information in E to allow one to determine the truth value of almost(E) in isolation, without appeal to other facts? The answer is that there could be. We can construct a model in which for every functional F there is a corresponding equivalent predicate q, such that

(∀ p, x) (F(p)(x) ≡ (∃ e) q(e) ∧ p'(e, x))

The existence of the model shows that this condition is not necessarily contradictory. Let the universe of discourse D be the class of finite sets built out of a finite set of urelements. The interpretation of a constant X will be some element of D; call it I(X). The interpretation of a monadic predicate p will be a subset of D; call it I(p). Then if E is such that p'(E, X), we define the interpretation of E to be <I(p), I(X)>. Now suppose we have a functional F mapping predicates into predicates. We can define the corresponding predicate q to be such that q(E) is true iff there are a predicate p and a constant X where the interpretation of E is <I(p), I(X)> and F(p)(X) is true. The fact that we can define such a predicate q in a moderately rich model means that we are licensed to treat opaque adverbials as properties of events and conditions.
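A toy instance of this construction (my own, following the shape of the argument in the text; the particular choice of F is invented purely for illustration):

```python
# Interpret the condition E of p'(E, X) as the pair <I(p), I(X)>, and let
# the predicate q corresponding to the functional F hold of E iff F(p)(X).
I = {"man": frozenset({"Bill", "Tom"}),   # I(man): the extension of "man"
     "J": "John"}

E = (I["man"], I["J"])                    # interpretation of E, man'(E, J)

def F_almost(p_extension):
    # Invented stand-in for the functional "almost": here, simply true of
    # anything outside p's extension. Only the shape of the construction
    # matters, not this particular choice.
    return lambda x: x not in p_extension

def q_almost(e):
    # q(E) is true iff E = <I(p), I(X)> for some p, X with F(p)(X) true.
    p_extension, x = e
    return F_almost(p_extension)(x)

print(q_almost(E))    # True: almost(E) holds, with no claim that Exist(E)
```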
The purpose of this exercise is only to show the viability of the approach. I am not claiming that a running event is an ordered pair of the runner and the set of all runners, although it should be harmless enough for those irredeemably committed to set-theoretic semantics to view it like that.

It should be noted that this treatment of adverbials has consequences for the individuating criteria on eventualities. We can say "John is almost a man" without wishing to imply "John is almost a mammal," so we would not want to say that John's being a man is the same condition as his being a mammal. We are forced, though not unwillingly, into a position of individuating eventualities according to very fine-grained criteria.

4 De Re and De Dicto Belief Reports

The next problem concerns the distinction (due to Quine (1956)) between de re and de dicto belief reports. A belief report like

(5) John believes a man at the next table is a spy.

has two interpretations. The de dicto interpretation is likely in the circumstance in which John and some man are at adjacent tables and John observes suspicious behavior. The de re interpretation is likely if some man is sitting at the table next to the speaker of the sentence, and John is nowhere around but knows the man otherwise and suspects him to be a spy. A sentence that very nearly forces the de re reading is

John believes Bill's mistress is Bill's wife.[7]

whereas the sentence

John believes Russian consulate employees are spies.

strongly indicates a de dicto reading.

[7] This example is due to Moore and Hendrix (1982).

In the de re reading of (5), John is not necessarily taken to know that the man is in fact at the next table, but he is normally assumed to be able to identify the man somehow. More on "identity" below. In the de dicto reading John believes there is a man who is both at the next table and a spy, but may be otherwise unable to identify the man.

The de re reading of (5) is usually taken to support the inference

(6) There is someone John believes to be a spy.

whereas the de dicto reading supports the weaker inference

(7) John believes that someone is a spy.

As Quine has pointed out, as usually interpreted, the first of these sentences is false for most of us, the second one true. A common notational maneuver (though one that Quine rejects) is to represent this distinction as a scope ambiguity. Sentence (6) is encoded as (8) and (7) as (9):

(8) (∃ x) believe(J, spy(x))
(9) believe(J, (∃ x) spy(x))

If one adopts this notation and stipulates what the expressions mean, then there are certainly distinct ways of representing the two sentences. But the interpretation of the two expressions is not obvious. It is not obvious for example that (8) could not cover the case where there is an individual such that John believes him to be a spy but has never seen him and knows absolutely nothing else about him - not his name, nor his appearance, nor his location at any point in time - beyond the fact that he is a spy. In fact, the notation we propose takes (8) to be the most neutral representation. Since quantification is over entities in the Platonic universe, (8) says that there is some entity in the Platonic universe such that John believes of that entity that it is a spy. Expression (8) commits us to no other beliefs on the part of John. When understood in this way, expression (8) is a representation of what is conveyed in a de dicto belief report.
Translated into the flat notation and introducing a constant for the existentially quantified variable, (8) becomes

(10) believe(J, P) ∧ spy'(P, S)

Anything else that John believes about this entity must be stated explicitly. In particular, the de dicto reading of (5) would be represented by something like

(11) believe(J, P) ∧ spy'(P, S) ∧ believe(J, Q) ∧ at'(Q, S, T)

where T is the next table. That is, John believes that S is a spy and that S is at the next table. John may know many other properties of S and still fall short of knowing who the spy is. There is a range of possibilities for John's knowledge, from the bare statements of (10) and (11) that correspond to a de dicto reading to the full-blown knowledge of S's identity that is normally present in a de re reading. In fact, an FBI agent would progress through just such a range of belief states on his way to identifying the spy.

To state John's knowledge of S's identity properly, we would have to state explicitly John's belief in a potentially very large collection of properties of the spy. To arrive at a succinct way of representing knowledge of identity in our notation, let us consider the two pairs of equivalent sentences:

What is that? Identify that.
The FBI doesn't know who the spy is. The FBI doesn't know the spy's identity.

The answer to the question "Who are you?" and what is required before we can say that we know who someone is or that we know their identity is a highly context-dependent matter. Several years ago, before I had ever seen Kripke, if someone had asked me whether I knew who Saul Kripke was, I would have said, "Yes. He's the author of Naming and Necessity." Then once I was at a workshop which I knew was being attended by Kripke, but I didn't yet know what he looked like. If someone had asked me whether I knew who Kripke was, I would have had to say, "No." The relevant property in that context was not his authorship of some paper, but any property that distinguished him from the others present, such as "the man in the back row holding a cup of coffee".

Knowledge of a person's identity is then a matter of knowing some context-dependent essential property that serves to identify that person for present purposes - that is, a matter of knowing who he or she is.

Therefore, we need a kind of place-holder predicate to stand for this essential property, that in any particular context can be specified more precisely. It happens that English has a morpheme that serves just this function - the morpheme "wh". Let us then posit a predicate wh that stands for the contextually determined property or conjunction of properties that would count as an identification in that particular context.

The de re reading of (5) is generally taken to include John's knowledge of the identity of the alleged spy. Assuming this, a de re belief report would be represented as a conjunction of two beliefs, one for the main predication and the other expressing knowledge of the essential property, the what-ness, of the argument of the predication:

believe(J, P) ∧ spy'(P, X) ∧ know(J, R) ∧ wh'(R, X)

That is, John believes X is a spy and John knows who X is.

However, let us probe this distinction just a little more deeply, and in particular call into question whether knowledge of identity is really part of the meaning of the sentence in the de re reading.
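For reference, the two readings developed so far can be written down as data in the flat notation (my encoding, purely illustrative):

```python
# The de dicto reading (11) and the identity-including de re reading, each
# as a list of literals in the flat notation.
de_dicto = [("believe", "J", "P"), ("spy'", "P", "S"),
            ("believe", "J", "Q"), ("at'", "Q", "S", "T")]

de_re = [("believe", "J", "P"), ("spy'", "P", "X"),
         ("know", "J", "R"), ("wh'", "R", "X")]   # John knows who X is
```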
The representation of the de dicto reading of (5), I have said, is

(12) believe(J, P) ∧ spy'(P, S) ∧ believe(J, Q) ∧ at'(Q, S, T)

Let us represent the de re reading as

(13a) believe(J, P) ∧ spy'(P, S) ∧ Exist(Q) ∧ at'(Q, S, T)
(13b) ∧ know(J, R) ∧ wh'(R, S)

What is common to (12) and (13) are the conjuncts believe(J, P), spy'(P, S) and at'(Q, S, T). There is a genuine ambiguity as to whether Q exists in the real world (de re) or is merely believed by John (de dicto). In addition, (13) includes the conjuncts know(J, R) and wh'(R, S) - but are these necessarily part of the de re interpretation of sentence (5)? The following example casts doubt on this. Suppose the entire Rotary Club is seated at the table next to the speaker of (5), but John doesn't know this. John believes that some member of the Rotary Club is a spy, but has no idea which one. Sentence (5) describes this situation, and only (13a) holds, not (13b) and not (12). Judgments are sometimes uncertain as to whether sentence (5) is appropriate in these circumstances, but it is certain that the sentence

John believes someone at the next table is a spy.

is appropriate, and that is sufficient for the argument.

It seems then that the conjuncts know(J, R) and wh'(R, S) are not part of what we want in the initial logical form of the sentence,[8] but only a very common conversational implicature. The reason the implicature is very common is that if John doesn't know that the man is at the next table, there must be some other description under which John is familiar with the man. The story I just told provides such a description, but not one sufficient for identifying the man.

[8] Another way of putting it: they are not part of the literal meaning of the sentence.

This analysis is attractive since it allows us to view the de re - de dicto distinction problem as just one instance of a much more general problem, namely, the existential status of the grammatically subordinated material in sentences. Generally, such material takes on the tense of the sentence. Thus, in

The boy built the boat.

a building event by x of y takes place in the past, and we assume that x was a boy in the past, at the time of the building. But in

Many rich men studied computer science in college.

the most natural reading is not that the men were rich when they were studying computer science but that they are rich now. In

The flower is artificial.

there is an entity x which is described as a flower, and x exists, but its "flower-ness" does not exist in the real world. Rather, it is a condition which is embedded in the opaque predicate "artificial".

It was stated above that the representation (10) for the de dicto reading conveys no properties of S other than that John believes him to be a spy. In particular, it does not convey S's existence in the real world. S thus refers to a possible individual, who may turn out to be actual if, for example, John ever comes to be able to identify the person whom he believes to be the spy, or if there is some actual spy who has given John good cause for his suspicions. However, S may not be actual, only possible. Suppose this is the case. One common objection to possible individuals is that they may seem to violate the Law of the Excluded Middle. Is S married or not married? Our intuition is that the question is inappropriate, and indeed the answer given in our formalism has this flavor.
By axiom (3), married(S) is really just an abbreviation for married'(E, S) ∧ Exist(E). This is false, for the existence of E in the real world would imply the existence of S. So married(S) is false. But its falsity has nothing to do with S's marital status, only his existential status. The predication unmarried(S) is false for the same reason. The primed predicates are basic, and for them the problem of the excluded middle does not arise. The predication married'(E, S) is true or false depending on whether E is the condition of S's being married. An unprimed, transparent predicate carries along with it the existence of its arguments, and it can fail to be true of an entity either through the entity being actual but not having that property or through the nonexistence of the entity.

5 Identity in Belief Contexts

The final problem I will consider arises in de dicto belief reports. It is the problem of identity in intensional contexts, raised by Frege (1892). One way of stating the problem is this. Why is it that if

(14) John believes the Evening Star is rising.

and if the Evening Star is identical to the Morning Star, it is not necessarily true that

(15) John believes the Morning Star is rising.

By Leibniz's Law, we ought to be able to substitute for an entity any entity that is identical to it. This puzzle survives translation into the logical notation, if John knows of the existence of the Morning Star and if proper names are unique. The representation for (the de dicto reading of) sentence (14) is

(16) believe(J, P1) ∧ rise'(P1, ES) ∧ believe(J, Q1) ∧ Evening-Star'(Q1, ES)

John's belief in the Morning Star would be represented

believe(J, Q2) ∧ Morning-Star'(Q2, MS)

The existence of the Evening Star and the Morning Star is expressed by

Exist(Q1) ∧ Exist(Q2)

The uniqueness of the proper name "Evening Star" is expressed by the axiom

(∀ x, y) Evening-Star(x) ∧ Evening-Star(y) ⊃ x = y

The identity of the Evening Star and the Morning Star is expressed

(∀ x) Evening-Star(x) ≡ Morning-Star(x)

From all of this we can infer that the Morning Star MS is also an Evening Star and hence is identical to ES, and hence can be substituted into rise'(P1, ES) to give rise'(P1, MS). Then we have

believe(J, P1) ∧ rise'(P1, MS) ∧ believe(J, Q2) ∧ Morning-Star'(Q2, MS)

This is a representation for the paradoxical sentence (15).

There are three possibilities for dealing with this problem. The first is to discard or restrict Leibniz's Law. The second is to deny that the Evening Star and the Morning Star are identical as entities in the Platonic universe; they only happen to be identical in the real world, and that is not sufficient for intersubstitutivity. The third is to deny that expression (16) represents sentence (14) because "the Evening Star" in (14) does not refer to what it seems to refer to.

The first possibility is the approach of researchers who treat belief as an operator rather than as a predicate, and then restrict substitution inside the operator.[9] We cannot avail ourselves of this solution because of the flatness of our notation. The predicate rise is surely referentially transparent, so if ES and MS are identical, MS can be substituted for ES in the expression rise'(P1, ES) to give rise'(P1, MS). Then the expression believe(J, P1) would not even require substitution to be a belief about the Morning Star.

[9] This is a purely syntactic approach, and when one tries to construct a semantics for it, one is generally driven to the third possibility.
In any case, this approach does not seem wise in view of the central importance played in discourse interpretation by the identity of differently presented entities, i.e. by coreference. Free intersubstitutivity of identicals seems a desirable property to preserve.

The second possible answer to Frege's problem is to say that in the Platonic universe, the Morning Star and the Evening Star are different entities. It just happens that in the real world they are identical. But then it is not true that ES = MS, for equality, like quantification, is over entities in the Platonic universe. The fact that ES and MS are identical in the real world (call this relation rw-identical) must be stated explicitly, say, by the expression

rw-identical(ES, MS)

or more properly,

(∀ x, y) Morning-Star(x) ∧ Evening-Star(y) ⊃ rw-identical(x, y)

For reasoning about "rw-identical" entities, that is, Platonic entities that are identical in the real world, we may take the following approach. Substitution in referentially transparent contexts would be achieved by use of the axiom schema

(17) (∀ e1, e3, e4, ...) p'(e1, ..., e3, ...) ∧ rw-identical(e4, e3) ⊃ (∃ e2) p'(e2, ..., e4, ...) ∧ rw-identical(e2, e1)

where e3 is the kth argument of p and p is referentially transparent in its kth argument. That is, if e1 is p's being true of e3, and e3 and e4 are identical in the real world, then there is a condition e2 of p's being true of e4, and e2 is identical to e1 in the real world. Substitution of "rw-identicals" in a condition results not in the same condition but in an "rw-identical" condition. There would be such an axiom for the first argument of believe but not for its referentially opaque second argument.

Axioms will express the fact that rw-identical is an equivalence relation:

(∀ x) rw-identical(x, x)
(∀ x, y) rw-identical(x, y) ⊃ rw-identical(y, x)
(∀ x, y, z) rw-identical(x, y) ∧ rw-identical(y, z) ⊃ rw-identical(x, z)

Finally, the following axiom, together with axiom (17), would express Leibniz's Law:

(∀ e1, e2) rw-identical(e1, e2) ⊃ (Exist(e1) ≡ Exist(e2))

From all of this we can prove that if the Evening Star rises then the Morning Star rises, but we cannot prove from John's belief that the Evening Star rises that John believes the Morning Star rises. If John knows the Morning Star and the Evening Star are identical, and he knows axiom (17), then his belief that the Morning Star rises can be proved as one would prove belief in the consequences of any other syllogism whose premises he believed, in accordance with a treatment of reasoning about belief developed in a longer version of this paper.

This solution is in the spirit of our whole representational approach in that it forces us to be painfully explicit about everything. The notation does no magic for us.

There is a significant cost associated with this solution, however. When proper names are represented as predicates and not as constants, the natural way to state the uniqueness of proper names is by means of axioms of the following sort:

(∀ x, y) Evening-Star(x) ∧ Evening-Star(y) ⊃ x = y

But since from the axioms for rw-identical we can show that Evening-Star(MS), it would follow that MS = ES. We must thus restate the axiom for the uniqueness of proper names as

(∀ x, y) Evening-Star(x) ∧ Evening-Star(y) ⊃ rw-identical(x, y)

A similar modification must be made for functions.
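Since rw-identical is stipulated to be an equivalence relation, the bookkeeping it requires is essentially union-find; a minimal sketch (my own, not part of the paper's proposal):

```python
# Maintain rw-identical as a union-find structure over Platonic entities.
# Reflexivity, symmetry and transitivity then hold by construction.
parent = {}

def find(x):
    parent.setdefault(x, x)
    while parent[x] != x:
        parent[x] = parent[parent[x]]   # path halving
        x = parent[x]
    return x

def assert_rw_identical(x, y):
    parent[find(x)] = find(y)

def rw_identical(x, y):
    return find(x) == find(y)

assert_rw_identical("ES", "MS")
print(rw_identical("MS", "ES"))   # True
```

The cheap part is the equivalence relation itself; what remains expensive is axiom schema (17), since each substitution into a transparent argument position spawns a new condition that is only rw-identical to the old one, so the conditions themselves multiply.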
Since we are using only predicates, the uniqueness of the value of a function must be encoded with an axiom like

(∀ x, y, z) father(x, z) ∧ father(y, z) ⊃ x = y

If x and y are both fathers of z, then x and y are the same. This would have to be replaced by the axiom

(∀ x, y, z) father(x, z) ∧ father(y, z) ⊃ rw-identical(x, y)

The very common problems involving reasoning about equality, which can be done efficiently, are thus translated into problems involving reasoning about the predicate rw-identical, which is very cumbersome.

One way to view this second solution is as a fix to the first solution. For "=" we substitute the relation rw-identical, and by means of axiom schema (17), we force substitutions to propagate to the eventualities they occur in, and we force the distinction between referentially transparent and referentially opaque predicates to be made explicit. It is thus an indirect way of rejecting Leibniz's Law.

The third solution is to say that "the Evening Star" in sentence (14) does not really refer to the Evening Star, but to some abstract entity somehow related to the Evening Star. That is, sentence (14) is really an example of metonymy. This may seem counterintuitive, and even bizarre, at first blush. But in fact the most widely accepted classical solutions to the problem of identity are of this flavor. For Frege (1892) "the Evening Star" in sentence (14) does not refer to the Evening Star but to the sense of the phrase "the Evening Star". In a more recent approach, Zalta (1983) takes such noun phrases to refer to "abstract objects" related to the real object. In both approaches noun phrases in intensional contexts refer to senses or abstract objects, while other noun phrases refer to actual entities, and so it is necessary to specify which predicates are intensional. In a Montagovian approach, "the Evening Star" would be taken to refer to the intension of the Evening Star, not its extension in the real world, and noun phrases would always be taken to refer to intensions, although for nonintensional predicates there would be meaning postulates that make this equivalent to reference to extensions.

Thus, in all these approaches intensional and extensional predicates must be distinguished explicitly, and noun phrases in intensional contexts are systematically interpreted metonymically. It would be easy enough in our framework to implement these approaches. We can define a function a of three arguments - the actual entity, the cognizer, and the condition used to describe the entity - that returns the sense, or intension, or abstract entity, corresponding to the actual entity for that cognizer and that condition. Sentence (14) would be represented, not as (16), but as

(18) believe(J, P1) ∧ rise'(P1, a(ES, J, Q1)) ∧ believe(J, Q1) ∧ Evening-Star'(Q1, ES)

I tend to prefer to think of the value of a(ES, J, Q1) as an abstract entity. Whatever it is, it is necessary that the value of a(ES, J, Q1) be something different from the value of a(ES, J, Q2), where Morning-Star'(Q2, ES). That is, different abstract objects must correspond to the condition Q1 of being the Evening Star and the condition Q2 of being the Morning Star. It is because of this feature that we escape the problem of intersubstitutivity of identicals, for substitution of MS for ES in (18) yields "... ∧ rise'(P1, a(MS, J, Q1)) ∧ ..." rather than "... ∧ rise'(P1, a(MS, J, Q2)) ∧ ...", which would be the representation of sentence (15).
The difficulty with this approach is that it makes the interpretation of noun phrases dependent on their embedding context:

Intensional context → metonymic interpretation
Extensional context → nonmetonymic interpretation

It thus violates, though not seriously, the naive compositionality that I have been at so many pains to preserve. Metonymy is a very common phenomenon in discourse, but I prefer to think of it as occurring irregularly, and not as signalled systematically by other elements in the sentence.

Having laid out the three possible solutions and their shortcomings, I find that I would like to avoid the problem of identity altogether. The third approach suggests a ruse for doing so. We can assume that, in general, (16) is the representation of sentence (14). We invoke no extra complications where we don't have to. When, in interpreting the text, we encounter a difficulty resulting from the problem of identity, we can go back and revise our interpretation of (14), by assuming the reference must have been a metonymic one to the abstract entity and not to the actual entity. In these cases it would be as if we are saying, "John couldn't believe about the Evening Star itself that it is rising. The paradox shows that he is insufficiently acquainted with the Evening Star to refer to it directly. He must be talking about an abstract entity related to the Evening Star." My guess is that we will not have to resort to this ruse often, for I suspect the problem rarely arises in actual discourse interpretation.

6 The Role of Semantics

Let me close by making some comments about ways of doing semantics. Semantics is the attempted specification of the relation between language and the world. However, this requires a theory of the world. There is a spectrum of choices one can make in this regard. At one end of the spectrum - let's say the right end - one can adopt the "correct" theory of the world, the theory given by quantum mechanics and the other sciences. If one does this, semantics becomes impossible because it is no less than all of science, a fact that has led Fodor (1980) to express some despair. There's too much of a mismatch between the way we view the world and the way the world really is. At the left end, one can assume a theory of the world that is isomorphic to the way we talk about it. What I have been doing in this paper, in fact, is an effort to work out the details in such a theory. In this case, semantics becomes very nearly trivial. Most activity in semantics today is slightly to the right of the extreme left end of this spectrum. One makes certain assumptions about the nature of the world that closely reflect language, and doesn't make certain other assumptions. Where one has failed to make the necessary assumptions, puzzles appear, and semantics becomes an effort to solve those puzzles. Nevertheless, it fails to move far enough away from language to represent significant progress toward the right end of the spectrum. The position I advocate is that there is no reason to make our task more difficult. We will have puzzles enough to solve when we get to discourse.

Acknowledgments

I have profited from discussions about this work with Chris Menzel, Bob Moore, Stan Rosenschein, and Ed Zalta. This research was supported by NIH Grant LM03611 from the National Library of Medicine, by Grant IST-8209346 from the National Science Foundation, and by a gift from the Systems Development Foundation.

References

[1] Bobrow, Daniel G. and Terry Winograd, 1977. "An Overview of KRL, A Knowledge Representation Language", Cognitive Science, vol. 1, pp. 3-46.

[2] Davidson, Donald, 1967. "The Logical Form of Action Sentences", in N. Rescher, ed., The Logic of Decision and Action, pp. 81-95, University of Pittsburgh Press, Pittsburgh, Pennsylvania.

[3] Fodor, J. A., 1980. "Methodological Solipsism Considered as a Research Strategy in Cognitive Psychology", The Behavioral and Brain Sciences, vol. 3, no. 1, March, 1980.

[4] Frege, Gottlob, 1892. "On Sense and Nominatum", in H. Feigl and W. Sellars, eds., Readings in Philosophical Analysis, pp. 85-102, Appleton-Century-Crofts, Inc., New York, 1949.

[5] Hayes, Patrick J., 1979. "The Logic of Frames", in D. Metzing, ed., Frame Conceptions and Text Understanding, pp. 46-61, Walter de Gruyter and Company.

[6] Hendrix, Gary G., 1975. "Extending the Utility of Semantic Networks Through Partitioning", Advance Papers, International Joint Conference on Artificial Intelligence, Tbilisi, Georgian SSR, pp. 115-121, September, 1975.

[7] Hobbs, Jerry R., 1983. "An Improper Treatment of Quantification in Ordinary English", Proceedings of the 21st Annual Meeting, Association for Computational Linguistics, pp. 57-63, Cambridge, Massachusetts, June, 1983.

[8] McCarthy, John, 1977. "Epistemological Problems of Artificial Intelligence", Proceedings, International Joint Conference on Artificial Intelligence, pp. 1038-1044, Cambridge, Massachusetts, August, 1977.

[9] Minsky, Marvin, 1975. "A Framework for Representing Knowledge", in Patrick H. Winston, ed., The Psychology of Computer Vision, pp. 211-277, McGraw-Hill.

[10] Moore, Robert C., 1980. "Reasoning about Knowledge and Action", SRI International Technical Report 191, October, 1980.

[11] Moore, Robert C. and Gary G. Hendrix, 1982. "Computational Models of Belief and the Semantics of Belief Sentences", in S. Peters and E. Saarinen, eds., Processes, Beliefs, and Questions, pp. 107-127, D. Reidel Publishing Company.

[12] Quillian, M. Ross, 1968. "Semantic Memory", in Marvin Minsky, ed., Semantic Information Processing, pp. 227-270, MIT Press, Cambridge, Massachusetts.

[13] Quine, Willard V., 1953. "On What There Is", in From a Logical Point of View, pp. 1-19, Harvard University Press, Cambridge, Massachusetts.

[14] Quine, Willard V., 1956. "Quantifiers and Propositional Attitudes", Journal of Philosophy, vol. 53.

[15] Reichenbach, Hans, 1947. Elements of Symbolic Logic, The Macmillan Company.

[16] Schmolze, J. G. and R. J. Brachman, 1982. "Summary of the KL-ONE Language", in Proceedings, 1981 KL-ONE Workshop, pp. 231-257, Fairchild Laboratory for Artificial Intelligence Research, Palo Alto, California.

[17] Simmons, Robert F., 1973. "Semantic Networks: Their Computation and Use for Understanding English Sentences", in Roger Schank and Kenneth Colby, eds., Computer Models of Thought and Language, pp. 63-113, W. H. Freeman, San Francisco.

[18] Zalta, Edward N., 1983. Abstract Objects: An Introduction to Axiomatic Metaphysics, D. Reidel Publishing Company, Dordrecht, Netherlands.
REVERSIBLE AUTOMATA AND INDUCTION OF THE ENGLISH AUXILIARY SYSTEM

Samuel F. Pilato
Robert C. Berwick
MIT Artificial Intelligence Laboratory
545 Technology Square
Cambridge, MA 02139, USA

ABSTRACT

In this paper we apply some recent work of Angluin (1982) to the induction of the English auxiliary verb system. In general, the induction of finite automata is computationally intractable. However, Angluin shows that restricted finite automata, the k-reversible automata, can be learned by efficient (polynomial time) algorithms. We present an explicit computer model demonstrating that the English auxiliary verb system can in fact be learned as a 1-reversible automaton, and hence in a computationally feasible amount of time. The entire system can be acquired by looking at only half the possible auxiliary verb sequences, and the pattern of generalization seems compatible with what is known about human acquisition of auxiliaries. We conclude that certain linguistic subsystems may well be learnable by inductive inference methods of this kind, and suggest an extension to context-free languages.

INTRODUCTION

Formal inductive inference methods have rarely been applied to actual natural language systems. Linguists generally suppose that languages are easy to learn because grammars are highly constrained; no "general purpose" inductive inference methods are required. This assumption has generally led to fruitful insights on the nature of grammars. Yet it remains to determine whether all of a language is learned in a grammar-specific manner. In this paper we show how to successfully apply one computationally efficient inductive inference algorithm to the acquisition of a domain of English syntax. Our results suggest that particular language subsystems can be learned by general induction procedures, given certain general constraints.

The problem is that these methods are in general computationally intractable. Even for regular languages induction can be exponentially difficult (Gold, 1978). This suggests that there may be general constraints on the design of certain linguistic subsystems to make them easy to learn by general inductive inference methods. We propose the constraint of k-reversibility as one such restriction. This constraint guarantees polynomial time inference (Angluin, 1982). In the remainder of this paper, we also show, by an explicit computer model, that the English auxiliary verb system meets this constraint, and so is easily inferred from a corpus. The theory gives one precise characterization of just where we may expect general inductive inference methods to be of value in language acquisition.

LEARNING K-REVERSIBLE LANGUAGES FROM EXAMPLES

The question we address is: if a learner presumes that a natural language domain is systematic in some way, can the learner intelligently infer the complete system from only a subset of sample sentences? Let us develop an example to formally describe what we mean by "systematic in some way," and how such a systematic domain allows the inference of a complete system from examples. If you were told that Mary bakes cakes, John bakes cakes, and Mary eats pies are legal strings in some language, you might guess that John eats pies is also in that language. Strings in the language seem to follow a recognizable pattern, so you expect other strings that follow the same pattern to be in the language also.

In this particular case, you are presuming that the to-be-learned language is a zero-reversible regular language.
Angluin (1982) has defined and explored the formal properties of reversible regular languages. We here translate some of her formal definitions into less technical terms.

A regular language is any language that can be generated from a formula called a regular expression. For example the strings mentioned above might have come from the language that the following regular expression generates:

(Mary|John) (bakes|eats) [[very* delicious] (cakes|pies)]

A complete natural language is too complex to be generated by some concise regular expression, but some simple subsets of a natural language can fit this kind of pattern.

To formally define when a regular language is reversible, let us first define a prefix as any substring (possibly zero-length) that can be found at the very beginning of some legal string in a language, and a suffix as any substring (again, possibly zero-length) that can be found at the very end of some legal string in a language. In our case the strings are sequences of words, and the language is the set of all legal sentences in our simplified subset of English. Also, in any legal string say that the suffix that immediately follows a prefix is a tail for that prefix. Then a regular language is zero-reversible if whenever two prefixes in the language have a tail in common, then the two prefixes have all tails in common.

In the above example prefixes Mary and John have the tail bakes cakes in common. If we presume that the language these two strings come from is zero-reversible, then Mary and John must have all tails in common. In particular, the third string shows that Mary has eats pies as a tail, so John must also have eats pies as a tail. Our current hypothesis after having seen these three strings is that they come not from the three-string language expressed by (Mary|John) bakes cakes | Mary eats pies, which is not zero-reversible, but rather from the four-string language (Mary|John) (bakes cakes | eats pies), which is zero-reversible. Notice that we have enlarged the corpus just enough to make the language zero-reversible.

A regular language is k-reversible, where k is a non-negative integer, if whenever two prefixes whose last k words match have a tail in common, then the two prefixes have all tails in common. A higher value of k gives a more conservative condition for inference. For example, if we presume that the aforementioned strings come from a 1-reversible language, then instead of presuming that whatever Mary does John does, we would presume only that whatever Mary bakes, John bakes. In this case the third string fails to yield any inference, but if we were later told that Mary bakes pies is in the language, we could infer that John bakes pies is also in the language. Further adding the sentence Mary bakes would allow 1-reversible inference to also induce John bakes, resulting in the seven-string 1-reversible language expressed by (Mary|John) bakes [cakes|pies] | Mary eats pies.

Table 1: Example of incremental k-reversible inference for several values of k.

SEQUENCE OF NEW       NEW STRINGS INFERRED:
STRINGS PRESENTED     k = 0                                     k = 1             k = 2
Mary bakes cakes      NONE                                      NONE              NONE
John bakes cakes      NONE                                      NONE              NONE
Mary eats pies        John eats pies                            NONE              NONE
Mary bakes pies       John bakes pies, Mary eats cakes,         John bakes pies   NONE
                      John eats cakes
Mary bakes            John bakes, Mary eats, John eats,         John bakes        NONE
                      Mary bakes cakes cakes, John bakes
                      cakes cakes, Mary bakes pies cakes,
                      ... (Mary|John)(bakes|eats)(cakes|pies)*
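The defining condition can be tested mechanically on a finite sample. The following sketch (mine, not the authors') treats the listed strings as the entire language and words as whitespace-separated tokens:

```python
from collections import defaultdict

def tails(language):
    """Map each prefix (a word tuple) of the finite language to its tails."""
    t = defaultdict(set)
    for s in language:
        words = tuple(s.split())
        for i in range(len(words) + 1):
            t[words[:i]].add(words[i:])
    return t

def is_k_reversible(language, k):
    """Check the definition directly: whenever two prefixes whose last k
    words match share one tail, they must share all tails."""
    t = tails(language)
    prefixes = list(t)
    for i, p in enumerate(prefixes):
        for q in prefixes[i + 1:]:
            if (p[max(len(p) - k, 0):] == q[max(len(q) - k, 0):]
                    and t[p] & t[q] and t[p] != t[q]):
                return False
    return True

sample = ["Mary bakes cakes", "John bakes cakes", "Mary eats pies"]
print(is_k_reversible(sample, 0))                        # False
print(is_k_reversible(sample + ["John eats pies"], 0))   # True
```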
With these examples zero-reversible inference would have generated (Mary|John)(bakes|eats)(cakes|pies)* by now, which overgeneralizes an optional direct object into zero or more direct objects. On the other hand, two-reversible inference would have inferred no additional strings yet. For a particular language we hope to find a k that is small enough to yield some inference but not so small that we overgeneralize and start inferring strings that are in fact not in the true language we are trying to learn. Table 1 summarizes our examples of k-reversible inference.

AN INFERENCE ALGORITHM

In addition to formally characterizing k-reversible languages, Angluin also developed an algorithm for inferring a k-reversible language from a finite set of positive examples, as well as a method for discovering an appropriate k when negative examples (strings known not to be in the language) are also presented. She also presented an algorithm for determining, given some k-reversible regular language, a minimal set of examples from which the entire language can be induced. We have implemented these procedures on a computer in MACLISP and have applied them to all of the artificial languages in Angluin's paper as well as to all of the natural language examples in this paper.

To describe the inference algorithm, we make use of the fact that every regular language can be associated with a corresponding deterministic finite-state automaton (DFA) which accepts or generates exactly that language.

Given a sample of strings taken from the full corpus, we first generate a prefix-tree automaton which accepts or generates exactly those strings and no others. We now want to infer additional strings so as to induce a k-reversible language, for some chosen k. Let us say that when accepting a string, the last k symbols encountered before arriving at a state is a k-leader of that state. Then to generalize the language, we recursively merge any two states where any of the following is true:

*Another state arcs to both states on the same word. (This enforces determinism.)
*Both states have a common k-leader and either
 -both states are accepting states or
 -both states arc to a common state on the same word.

When none of these conditions obtains any longer, the resulting DFA accepts or generates the smallest k-reversible language that includes the original sample of strings. (The term "reversible" is used because a k-reversible DFA is still deterministic with lookahead k when its sets of initial and final states are swapped and all of its arcs are reversed.)

This procedure works incrementally. Each new string may be added to the DFA in prefix-tree fashion and the state-merging algorithm repeated. The resulting language induced is independent of the order of presentation of sample strings. If an appropriate k is not known a priori, but some negative as well as positive examples are presented, then one can try increasing values of k until the induced language contains none of the negative examples.

Though the inference algorithm takes a sample and induces a k-reversible language, it is quite helpful to use Angluin's algorithm for going in the reverse direction: given a k-reversible language we can determine what minimal set of shortest possible examples (a "characteristic" or "covering" sample) will be sufficient for inducing the language. Though the minimal number of examples is of course unique, the set of particular strings in the covering sample is not necessarily unique.
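Here is a compact, deliberately naive Python rendering of the merging procedure just described (the names are mine, states are identified by representative prefixes, and Angluin's actual algorithm is far more efficient than this fixpoint loop):

```python
def infer_k_reversible(sample, k):
    """Build a prefix-tree acceptor for `sample` (strings of space-separated
    words), then merge states until the k-reversibility conditions hold.
    Returns (transitions, accepting states, start state)."""
    states, delta, accepting = {()}, {}, set()
    for line in sample:
        words = tuple(line.split())
        for i, w in enumerate(words):
            states.add(words[:i + 1])
            delta[(words[:i], w)] = words[:i + 1]
        accepting.add(words)

    parent = {s: s for s in states}          # union-find over states
    def find(s):
        while parent[s] != s:
            parent[s] = parent[parent[s]]
            s = parent[s]
        return s

    def violation():
        """Return two state classes that the conditions force to merge."""
        trans = {}
        for (src, w), dst in delta.items():
            key = (find(src), w)
            if key in trans and trans[key] != find(dst):
                return trans[key], find(dst)          # enforce determinism
            trans[key] = find(dst)
        accept = {find(s) for s in accepting}
        leaders = {}                                  # class -> its k-leaders
        for s in states:
            leaders.setdefault(find(s), set()).add(s[max(len(s) - k, 0):])
        classes = list(leaders)
        for i, a in enumerate(classes):
            for b in classes[i + 1:]:
                if not leaders[a] & leaders[b]:
                    continue                          # no common k-leader
                if a in accept and b in accept:
                    return a, b                       # both accepting
                if any(c == a and trans.get((b, w)) == d
                       for (c, w), d in trans.items()):
                    return a, b                       # common successor
        return None

    pair = violation()
    while pair:
        parent[find(pair[0])] = find(pair[1])
        pair = violation()

    trans = {(find(s), w): find(d) for (s, w), d in delta.items()}
    return trans, {find(s) for s in accepting}, find(())

def accepts(dfa, line):
    trans, accepting, state = dfa
    for w in line.split():
        if (state, w) not in trans:
            return False
        state = trans[(state, w)]
    return state in accepting

dfa = infer_k_reversible(
    ["Mary bakes cakes", "John bakes cakes", "Mary eats pies"], k=0)
print(accepts(dfa, "John eats pies"))   # True: zero-reversible generalization
```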
INFERENCE OF THE ENGLISH AUXILIARY SYSTEM

We have chosen to test the English auxiliary system under k-reversible inference because English verb sequences are highly regular, yet they have some degree of complexity and admit some exceptions. We represent the English auxiliary system as a corpus of 92 variants of a declarative statement in third person singular. The variants cover all standard legal permutations of tense, aspect, and voice, including do support and nine modals. We simply use the surface forms, which are strings of words with no additional information such as syntactic category or root-by-inflection breakdown. For instance, the present, simple, active example is Judy gives bread. One modal, perfective, passive variant is Judy would have been given bread.

We have explored the k-reversible properties of this natural language subsystem in two main steps. First we determined for what values of k the corpus is in fact k-reversible. (Given a finite corpus, we could be sure the language is k-reversible for all k at or above some value.) To do this we treated the full corpus as a set of sample strings and tried successively larger values of k until finding one where k-reversible inference applied to the corpus generates no additional strings. We could then be sure that any k of that value or greater could be used to infer an accurate model of the English auxiliary system without overgeneralizing.

After finding the range of values of k to work with, we were interested in determining which, if any, of those values of k would yield some power to infer the full corpus from a proper subset of examples. To do this we took the DFA which represents the full corpus and computed, for a trial k, a set of sample strings that would be minimally sufficient to induce the full corpus. If any such values of k exist, then we can say that, in a nontrivial way, the English auxiliary system is learnable as a k-reversible language from examples.

We found that the English auxiliary system can be faithfully modeled as a k-reversible regular language for k ≥ 1. Only zero-reversible inference overgeneralizes the full corpus as well as the active and passive corpora treated as separate languages. For the active corpus, zero-reversible inference groups the forms of do with the other modals. The DFAs for the passive and full corpora also contain loops and thereby generate infinite numbers of illegal variants.

Figure 1 compares a correct DFA for the English auxiliary system with an overgeneralized DFA. Both are shown in a minimized, canonical form. The top, correct, automaton can be generated by either minimizing the prefix tree for the full corpus or by minimizing the result of k-reversible inference applied to any sufficiently characteristic set of sample sentences, for any k ≥ 1. One can read off all 92 variants in the language by taking different paths from initial state to final state.
The bottom, overgeneralized, automaton is generated by subjecting the top one to zero-reversible infer- euce, Does treating the English auxiliary system as a I-or- more-reversible l,'mguage yield any inferential power? The English auxiliary system as a l-reversible language can in fact be inferred from a cover of only 48 examples out of the 92 variants in the corpus. The active corpus treated separately requires 38 examples out of 46 and the passive corpus requires 28 out of 46. Treating the full corpus as a 2-reversible language requires 76 examples, and a 3 "~- reversible model cannot infer the corpus from any proper subset whatsoever. For l-reversible inference, 45 of the verb sequences of length three or shorter will yield the remaining nine such strings and nonc longer. Verb sequences of length four or five can be divided into two patterns, <modal> have been 9iv(ing,,en) ,'wad ... be, en} bern9 given. Adding any one (length-four) string from the first pattern will yield the re- maining 17 strings of that pattern. Further adding two length-four strings from the awkward second pattern will yield the remaining 18 strings of that pattern, nine of which are of length five. This completes the corpus. DISCUSSION The auxiliary system has often been regarded ,as an acid test for a theory of langulage acquisition. Given this, we are encouraged that it is in fact learnable via a computationally eII.icient general method. It is significant that at [east in this domain we have found a k (of l) that is low enough to generate a good amount of inference from examples yet high enough to avoid overgeneralization. Even more conservative 2-reversibility generates a little inference. This inductive power derives from the systematic se- quential structure of the English auxiliary system. In an idealized form (ignoring tense and inflections) the regular expression [DO I [<modal>] [HAVE] [nEll [BEpassive] GIVE generates all English verb sequence patterns in our corpus. Zero-reversible inference basically attempts to simplify any partial, disjunctive permutation like (a'.b)z:ay into an exhaustive, combinatorial permutation like (ab)(z',y). Since the active corpus (excluding BE.passive from the idealized regular expression) in fact has such a simple form except for the DO disjunction, zero-reversible inference productively completes the three-place permutation but also destroys the disjunction, by overgeneralizing what patterns can follow both DO ,'rod <modal>. One-reversible inference requires that disjuncts share some final word to be mergeable, so that DO cannot merge with any auxiliary triplet, yet the permutation of < modal:, IIA VE by BE; is still productive. Similar considerations obtain in the passive case, as well as for the joint corpus. Table 2 illustrates the trade-off in this case between inferential power and the proper handling of exceptions. In complex environments, rather than reduce the infer- ential power by raising k one could instead embed this al- gorithm within a larger system. For example, a more re- alistic model of processing English verb sequences would have an external, more linguistically motivated mechanism force the separate tre.atment of active versus passive forms. Then if, say on considerations of frequency of occurrence, do exceptions were externally handled and the infrequent Table 2: Incremental k-reversible inference of some English auxiliary verb sequences. SEQUENCE ()F NEW NEW .~TRIN(;S INFERRED: .~TRIN(;S PIIESENTED ilk = 0 ' k = ! 
In complex environments, rather than reduce the inferential power by raising k, one could instead embed this algorithm within a larger system. For example, a more realistic model of processing English verb sequences would have an external, more linguistically motivated mechanism force the separate treatment of active versus passive forms. Then if, say on considerations of frequency of occurrence, do exceptions were externally handled and the infrequent ... BE being ... cases were similarly excluded from the immature learner, then one could apply the more powerful zero-reversible inference to the remaining active and passive forms without overgeneralizing. In such a case the active system can be induced from 18 examples out of 44 variants and the passive system from 14 out of 22. The entire active system is learnable once examples of each form of each verb and each modal have been seen, plus one example to fix the relative order of have vs. be, and one example each to fix the order of modal vs. have or be.

Though a more complex model must ultimately represent a domain like the English auxiliary system, the way k-reversible inference in itself handles a complex territory satisfies some conditions of psychological fidelity. Especially zero-reversibility is a rather simple form of generalization of sequential patterns with which we believe humans readily identify. In general the longer, more complex cases can be inferred from simpler cases. Also, there is a reasonable degree of play in the composition of the covering sample, and the order of presentation does not affect the language learned.

Children evidently never make mistakes on the relative order of auxiliaries, which is consistent with the reversibility model, but they do mistakenly combine do with tensed verb forms (Pinker, 1984). Given that the appearance of do in declarative sentences is also fairly rare, one might prefer the aforementioned zero-reversible system that handles do support as an exception, rather than opt for a 1-reversible inference which is flawless but a slower learner.

The ... BE being ... cases are systematically related to the rest, but also have a natural boundary: 1-reversible inference from simpler cases doesn't intrude into that territory, yet only a few such examples allow one to infer the remainder. Very rare sequences like could have been being given will be successfully acquired even if they are not seen. This seems consistent with human judgments that such phrasing is awkward but apparently legal.

k-Reversibility is essentially a model of simplicity, not of complexity. As such, it induces not linguistic structure but the substitution classes that linguistic structures typically work with, building these by analogy from examples. In the linguistic structure for which k-reversibility is defined - regular grammars - it functions to induce the classes that fill "slots" in a regular expression, based on the similarity of tail sets. Increasing the value of k is a way of requiring a higher degree of similarity before calling a match. (See Gonzalez and Thomason, 1978, for other approaches to k-tail inference that are not so efficient.)

The same principle can apply to the induction of substitution classes in other linguistic domains including morphological, syntactic, and semantic systems. For a particularly direct example, consider the right-hand sides of context-free rewrite rules. Any subset of such rules having the same left-hand side constitutes a regular language over the set of terminal and nonterminal symbols, and is therefore a candidate for induction.
One might thus infer new rewrite rules from the pattern of existing ones, thereby not only concluding that words are members of certain simple syntactic classes, but also simplifying a disjunctive set of rules into a more concise set that exhibits systematic properties. Berwick's LPARSIFAL system (1982) is an example of this kind of extension.

We believe that k-reversibility illustrates a psychologically plausible pattern induction process for natural language learning that in its simplest form has an efficient computational algorithm associated with it. The basic principle behind k-reversible inference shows some promise as a flexible tool within more complex models of language acquisition. It is encouraging that, at least in a simple case, computational linguistic models can suggest formal learnability constraints that are natural enough to be useful in the learning of human languages.

ACKNOWLEDGMENTS

This paper describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under the Office of Naval Research Contract N00014-80-C-0505.

REFERENCES

Angluin, D., "Inference of reversible languages," Journal of the Association for Computing Machinery, 29(3), 741-765, 1982.

Berwick, R., Locality Principles and the Acquisition of Syntactic Knowledge, PhD thesis, MIT Department of Electrical Engineering and Computer Science, 1982.

Gold, E., "Complexity of Automaton Identification from Given Data," Information and Control, 37, 1978.

Gonzalez, R., and Thomason, M., Syntactic Pattern Recognition, Reading, MA: Addison-Wesley, 1978.

Pinker, S., Language Learnability and Language Development, Cambridge, MA: Harvard University Press, 1984.
TUTORIAL ABSTRACTS

Introduction to Computational Linguistics
Ralph Grishman, New York University

This tutorial provides a general overview of computational linguistics. Topics to be considered include the components of a natural language processing system; syntax analysis (including context-free grammars, augmented context-free grammars, grammatical constraints, and sources of syntactic ambiguity); semantic analysis (including meaning representation, semantic constraints, and quantifier analysis); and discourse analysis (identifying implicit information, establishing text coherence, frames, and scripts). Examples will be drawn from various application areas, including database interface and text analysis.

Natural Language Generation
Kathleen McKeown, Columbia University

In this tutorial, we will begin by identifying the types of decisions involved in language generation and how they differ from problems in the interpretation of natural language. Several techniques that have been used for "surface" generation (i.e., determining the syntactic structure and vocabulary of the generated text) will be examined, including grammars, dictionaries, and templates. From there, we will move on to other problems in language generation, including how the system can decide what to say in a given situation and how it can order the information for inclusion in a text. Here we will study the constraints that have been used for these decisions in domains such as expert systems, database systems, scene description, and problem solving. We will also look at the interaction between conceptual decisions such as these and decisions in surface generation, considering approaches that propose an integrated solution.

Structuring the Lexicon
Robert Ingria, BBN Laboratories Incorporated

This tutorial will discuss the information that has been stored in the lexicon. It will first deal with the types of information that have typically been placed in lexical entries, detailing what sorts of lexical information are necessary for natural language systems. The format of lexical entries and the relationships between lexical entries will be considered next (as in cases of irregularly inflected forms, such as "go", "went", and "gone"; abbreviations and acronyms, such as "helo" and "helicopter"; and derived forms, such as "destroy" and "destruction"). Alternate places for storing information will also be considered (for example, regular morphological information might be contained in individual lexical entries or in the grammar). The tutorial will conclude with the implications of recent work in linguistic theory for the structure of lexicons for computational purposes.

Recent Developments in Syntactic Theory and Their Computational Import
Anthony S. Kroch, University of Pennsylvania

Syntactic frameworks currently under development in linguistics take different perspectives on several issues of computational interest. Among these are: (1) the importance of stating linguistic theories in a well-defined and explicit formalism whose mathematical properties are known or investigable; (2) the degree to which the syntactic properties of sentences can be understood independently of their semantic interpretation; and (3) the extent to which empirical and mathematical results on parsing and generation can illuminate linguistic issues.
We shall discuss the perspectives on these and related questions held by various current linguistic theories, including generalized phrase structure grammar (GPSG), government-binding theory (GB), lexical-functional grammar (LFG), and tree adjoining grammar (TAG).

Current Approaches to Natural Language Semantics
Graeme Hirst, University of Toronto

This tutorial provides a survey of various computational approaches to semantics--the process of determining the meaning of a sentence or other utterance. Issues addressed will include definitions of meaning; the differences between linguistic theories of semantics and formalisms suitable for computational understanding of language; knowledge representations that are suitable for representing linguistic meaning; the relationship between semantic processing and syntactic parsing; and factors in choosing a semantic formalism for a particular computational application. The approaches to semantics that will be discussed will include procedural semantics, conceptual dependency, Montague semantics, and compositional and knowledge-based approaches.

Machine Translation
Sergei Nirenburg, Colgate University

This tutorial will address the recent resurgence of interest in machine translation (MT) in the United States, Europe, and Japan. Topics to be discussed include the variety of objectives for MT systems; various research and development methodologies; MT as an application area for theoretical linguistics, computational linguistics, and artificial intelligence; environments for MT research; and selected case studies of research projects.
PARSING A FREE-WORD ORDER LANGUAGE: WARLPIRI

Michael B. Kashket
Artificial Intelligence Laboratory
Massachusetts Institute of Technology
545 Technology Square, Room 823
Cambridge, MA 02139

ABSTRACT

Free-word order languages have long posed significant problems for standard parsing algorithms. This paper reports on an implemented parser, based on Government-Binding theory (GB) (Chomsky, 1981, 1982), for a particular free-word order language, Warlpiri, an aboriginal language of central Australia. The parser is explicitly designed to transparently mirror the principles of GB.

The operation of this parsing system is quite different in character from that of a rule-based parsing system, e.g., a context-free parsing method. In this system, phrases are constructed via principles of selection, case-marking, case-assignment, and argument-linking, rather than by phrasal rules.

The output of the parser for a sample Warlpiri sentence of four words in length is given. The parser was executed on each of the 23 other permutations of the sentence, and it output equivalent parses, thereby demonstrating its ability to correctly handle the highly scrambled sentences found in Warlpiri.

INTRODUCTION

Basing a parser on Government-Binding theory has led to a design that is quite different from traditional algorithms.¹ The parser presented here operates in two stages, lexical and syntactic. Each stage is carried out by the same parsing engine. The lexical parser projects each constituent lexical item (morpheme) according to information in its associated lexical entries. Lexical parsing is highly data-driven from entries in the lexicon, in keeping with GB. Lexical parses returned by the first stage are then handed over to the second stage, the syntactic parser, as input, where they are further projected and combined to form the final phrase marker.

¹ Johnson (1985) reports another design for analyzing discontinuous constituents; it is not grounded on any linguistic theory, however.

Before plunging into the parser itself, a sample Warlpiri sentence is presented. Following this, the theory of argument (i.e., NP) identification is given, in order to show how its substantive linguistic principles may be used directly in parsing. Both the lexicon and the other basic data structures are then discussed, followed by a description of the central algorithm, the parsing engine. Lexical phrase-markers produced by the parser for the words kurduku and puntarni are then given. Finally, the syntactic phrase-marker for the sample sentence is presented. All the phrase-markers shown are slightly edited outputs of the implemented program.

A SAMPLE SENTENCE

In order to make the presentation of the parser a little less abstract, a sample sentence of Warlpiri is shown in (1):

(1) Ngajulu-rlu ka-rna-rla punta-rni kurdu-ku karli.
    I-ERG PRES-1-3 take-NPST child-DAT boomerang
    'I am taking the boomerang from the child.'

(The hyphens are introduced for the nonspeaker of Warlpiri in order to clearly delimit the morphemes.) The second word, karnarla, is the auxiliary which must appear in the second (Wackernagel's) position. Except for the auxiliary, the other words may be uttered in any order; there are 4! ways of saying this sentence.

The parser assumes that the input sentence can be broken into its constituent words and morphemes.² Sentence (1) would be represented as in (2). The parser can not yet handle the auxiliary, so it has been omitted from the input.

² Barton (1985) has written a morphological analyzer that breaks down Warlpiri words into their constituent morphemes. We have connected both parsers so that the user is able to enter sentences in a less stilted form. Input (2), however, is given directly to the main parser, bypassing Barton's analyzer.
(2) ((NGAJULU RLU) (PUNTA RNI) (KURDU KU) (KARLI))

ARGUMENT IDENTIFICATION

Before presenting the lexicon, GB argument identification as it is construed for the parser is presented.³ Case is used to identify syntactic arguments and to link them to their syntactic predicates (e.g., verbal, nominal and infinitival). There are three such cases in Warlpiri: ergative, absolutive and dative. Argument identification is effected by four subsystems involving case: selection, case-marking, case-assignment, and argument-linking.

³ This analysis of Warlpiri comes from several sources, and from the helpful assistance of Mary Laughren. See, for example, (Laughren, 1978; Nash, 1980; Hale, 1983).

Only maximal projections (e.g., NP and VP, in English) are eligible to be arguments. In order for such a category to be identified as an argument, it must be visible to each of the four subsystems. That is, it must qualify to be selected by a case-marker, marked for its case, assigned its case, and then linked to an argument slot demanding that case.

Selection is a directed action that, for Warlpiri, may take the category preceding it as its object. This follows from the setting of the head parameter of GB: Warlpiri is a head-final language. Selection involves a co-projection of the selector and its object, where both categories are projected one level. For example, the tensed element, rni, selects verbs, and then co-projects to form the combined "inflected verb" category. An example is presented below.

The other three events occur under the undirected structural relation of siblinghood. That is, the active category (e.g., case-marker) must be a sibling of the passive category (e.g., category being marked for the case).

[Figure 1 (drawing garbled in source): An example of argument identification -- the projection of kurdu-ku.]

Consider figure 1. The dative case-marker, ku, selects its preceding sibling, kurdu, for dative case. Once co-projected, the dative case-marker may then mark its selected sibling for dative case. Because ku is also a case-assigner, and because kurdu has already been marked for dative case, it may also be assigned dative case. The projected category may then be linked to dative case by punta-rni, which links dative arguments to the source thematic (θ) role because it has been assigned dative case. In this example, the dative case-marker performed the first three actions of argument identification, and the verb performed the last. Note that only when kurdu was selected for case was precedence information used; case-marking, case-assignment and argument-linking are not directional. In this way, the fixed-morpheme order and free-word order have been properly accounted for.

THE LEXICON

The actions for performing argument identification, as well as the data on which they operate, are stored for each lexical item in the lexicon. The part of the lexicon necessary to parse sentence (2) is given in figure 2. The lexicon is intended to be a transparent encoding of the linguistic knowledge.

(KARLI (datum (v -))
       (datum (n +)))
(KU (action (assign dative))
    (action (mark dative))
    (action (select (dative ((v . -) (n . +)))))
    (datum (case dative))
    (datum (percolate t)))
(KURDU (datum (v -))
       (datum (n +)))
(NGAJULU (datum (v -))
         (datum (n +))
         (datum (person 1))
         (datum (number singular)))
(PUNTA (datum (v +))
       (datum (n -))
       (datum (conjugation 2))
       (datum (theta-roles (agent theme source))))
(RLU (action (mark ergative))
     (action (select (ergative ((v . -) (n . +)))))
     (datum (case ergative))
     (datum (percolate t)))
(RNI (action (assign absolutive))
     (action (select (+ ((v . +) (n . -) (conjugation . 2)))))
     (datum (tns +))
     (datum (tense nonpast)))

Figure 2: A portion of the lexicon.

CONJUGATION stands for the conjugation class of the verb; in Warlpiri there are five conjugation classes. SELECT takes a list of two arguments. The first is the element that will denote selection; in the case of a grammatical case-marker, it is the grammatical case. The second argument is the list of data that the prospective object must match in order to be selected. For example, rlu requires that its object be a noun in order to be selected.

The representation for a lexicon is simply a list of morpheme-value pairs; lookup consists simply of searching for the morpheme in the lexicon and returning the value associated with it. The associated value consists of the information that is stored within a category, namely, data and actions. Only the information that is lexically determined, such as person and number for pronouns, is stored in the lexicon.

There is another class of lexical information, lexical rules, which applies across categories. For example, all verbs in Warlpiri with an agent θ-role assign ergative case. Since this case-assignment is a feature of all verbs, it would not be appropriate to store the action in each verbal entry; instead, it is stated once as a rule. These rules are represented straightforwardly as a list of pattern-action pairs. After lexical look-up is performed, the list of rules is applied. If the pattern of the rule matches the category, the rule fires, i.e., the information specified in the "action" part of the rule is added to the category. For an example, see the parse of the inflected verb, puntarni, in figure 4, below. (A sketch of this lexical machinery is given below.)

THE BASIC DATA STRUCTURES

The basic data structure of the parsing engine is the projection, which is represented as a tree of categories. Both dominance and precedence information is recorded explicitly. It should be noted, however, that the precedence relations are not considered in all of the processing; they are taken into account only when they are needed, i.e., when a category is being selected.

While the phrase-marker is being constructed there may be several independent projections that have not yet been connected, as, for example, when two arguments have preceded their predicate. For this reason, the phrase-marker is represented as a forest, specifically with an array of pointers to the roots of the independent projections. An array is used in lieu of a set because the precedence information is needed sometimes, i.e., when selecting a category, as above.

These two structures contain all of the necessary structural relations for parsing. However, in the interests of explicit representation and speeding up the parser somewhat, two auxiliary structures are employed. The argument set points to all of the categories in the phrase-marker that may serve as arguments to predicates. Only maximal projections may be entered in this set, in keeping with X-bar theory. Note that a maximal projection may serve as an argument of more than one predicate, so that a category is never removed from the argument set.
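As an illustration of this organization, the following Python sketch encodes a few entries in the figure-2 format and applies the ergative lexical rule mentioned above. The encoding and function names are our own illustration, not the original implementation.

    # Minimal sketch of the lexicon and lexical-rule application
    # (hypothetical names; not the original system).

    LEXICON = {
        "kurdu": {"data": [("v", "-"), ("n", "+")], "actions": []},
        "ku":    {"data": [("case", "dative"), ("percolate", True)],
                  "actions": [("assign", "dative"), ("mark", "dative"),
                              ("select", ("dative", {"v": "-", "n": "+"}))]},
        "punta": {"data": [("v", "+"), ("n", "-"), ("conjugation", 2),
                           ("theta-roles", ("agent", "theme", "source"))],
                  "actions": []},
    }

    # Lexical rules are pattern-action pairs, applied after look-up:
    # if the pattern matches the category, the rule "fires" and its
    # information is added to the category.
    LEXICAL_RULES = [
        # "All verbs with an agent theta-role assign ergative case."
        (lambda cat: ("v", "+") in cat["data"]
                     and "agent" in dict(cat["data"]).get("theta-roles", ()),
         {"actions": [("assign", "ergative")]}),
    ]

    def look_up(morpheme):
        cat = {"morpheme": morpheme,
               "data": list(LEXICON[morpheme]["data"]),
               "actions": list(LEXICON[morpheme]["actions"])}
        for pattern, addition in LEXICAL_RULES:
            if pattern(cat):
                cat["actions"] += addition["actions"]
        return cat

    print(look_up("punta")["actions"])   # -> [('assign', 'ergative')]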
The second auxiliary structure is the set of unsatisfied predicates, which points to all of the categories in the phrase-marker that have unexecuted actions. Unlike the argument set, when the actions of a predicate are executed, the category is removed from the set.

The phrase-marker contains all of the structural relations required by GB; however, there is much more information that must be represented in the output of the parser. This information is stored in the feature-value lists associated with each category. There are two kinds of features: data and actions. There may be any number of data and actions, as dictated by GB; that is, the representation does not constrain the data and actions. The actions of a category are found by performing a look-up in its feature-value list. On the other hand, the data for a category are found by collecting the data for itself and each of the subcategories in its projection in a recursive manner. This is done because data are not percolated up projections.

The list of actions is not completely determined. Selection, case-marking, case-assignment, and argument-linking are represented as actions (cf. the discussion of case, above). It should be noted that these are the only actions available to the lexicon writer. Actions do not consist of arbitrary code that may be executed, such as when an arc is traversed in an ATN system. The supplied actions, as derived from GB, should provide a comprehensive set of linguistically relevant operations needed to parse any sentence of the target language.

Although the list of data types is not yet complete, a few have already proved necessary, such as person and number information for nominal categories. The list of θ-roles for which a predicate subcategorizes is also stored as data for the category.

THE PARSING ENGINE

The parsing engine is the core of both the lexical and the syntactic parsers. Therefore, their operations can be described at the same time. The syntactic parser is just the parsing engine that accepts sentences (i.e., lists of words) as input, and returns syntactic phrase-markers as output. The lexical parser is just the parsing engine that accepts words (i.e., lists of morphemes) as input, and returns lexical phrase-markers as output.

The engine loops through each component of the input, performing two computations. First it calls its subordinate parser (e.g., the lexical parser is the subordinate parser of the syntactic parser) to parse the component, yielding a phrase-marker. (The subordinate parser for the lexical parser performs a look-up of the morpheme in the lexicon.) In the second computation, the set of unsatisfied predicates is traversed to see if any of the predicates' actions can apply. This is where selection, case-marking, projection, and so on, are performed. (The main loop is sketched below.)

Note that there is no possible ambiguity during the identification of arguments with their predicates. This stems from the fact that selection may only apply to the (single) category preceding the predicate category, and that each of the subsequent actions may only apply serially. This assumes single-noun noun phrases. In the next version of the parser, multiple-noun noun phrases will be tackled. However, the addition of word stress information will serve to disambiguate noun grouping.
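The following is a minimal, runnable skeleton of that loop, with our own (hypothetical) names; the body of try_actions stands in for the selection, case-marking, case-assignment, argument-linking, and projection machinery described above.

    class Category:
        def __init__(self, morpheme, data=(), actions=()):
            self.morpheme = morpheme
            self.data = list(data)
            self.actions = list(actions)   # unexecuted actions, e.g. ('mark', 'dative')

    def try_actions(predicate, forest):
        # Stand-in: a real implementation would attempt each queued action
        # against eligible categories in the forest and rebuild the
        # affected projections.
        return not predicate.actions

    def engine(components, subordinate_parse):
        forest = []          # array of roots of unconnected projections
        unsatisfied = []     # categories with unexecuted actions
        for component in components:
            # 1. Parse the component one level down, yielding a phrase-marker.
            marker = subordinate_parse(component)
            forest.append(marker)
            if marker.actions:
                unsatisfied.append(marker)
            # 2. Try the pending actions of every unsatisfied predicate.
            for predicate in list(unsatisfied):
                if try_actions(predicate, forest):
                    unsatisfied.remove(predicate)
        return forest

    words = [Category("kurdu"), Category("karli")]
    print([c.morpheme for c in engine(words, lambda c: c)])   # ['kurdu', 'karli']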
There may be ambiguity in the parsing of the morphemes; that is, there may be more than one entry for a single morpheme. The details of this disambiguation are not clear. One possible solution is to split the parsing process into one process for each entry, and to let each daughter process continue on its own. This solution, however, is rather brute-force and does not take advantage of the limited ambiguity of multiple lexical entries. For the moment, the parser will assume that only unambiguous morphemes are given to it.

After the loop is complete, the engine performs default actions. One example is the selection for and marking of absolutive case. In Warlpiri, the absolutive case-marker is not phonologically overt. The absolutive case-marker is left as a default, where, if a noun has not been marked for a case upon completion of lexical parsing, absolutive case is marked. This is how karli is parsed in sentence (2); see figures 6 and 7, below.

The next operation of the engine is to check the well-formedness of the parse. For both the lexical parser and the syntactic parser, one condition is that the phrase-marker consist of a single tree, i.e., that all constituents have been linked into a single structure. This condition subsumes the Case Filter of GB. In order for a noun phrase to be linked to its predicate it must have received case; any noun phrase that has not received case will not be linked to the projection of the predicate, and the phrase-marker will not consist of a single tree.

The last operation percolates unexecuted actions to the root of the phrase-marker, for use at the next higher level of parsing. For example, the assignments of both ergative case and absolutive case in the verb puntarni are not executed at the lexical level of parsing. So, the actions are percolated to the root of the phrase-marker for the conjugated verb, and are available for syntactic parsing. In the parse of sentence (2), they are, in fact, executed at the syntactic level.

TWO PARSED WORDS

The parse of kurduku, meaning 'child' marked for dative case, is presented in figure 3. It consists of a phrase-marker with a single root, corresponding to the declined noun. It has two children, one of which is the noun, kurdu, and the other the case-marker, ku.

0: actions: ASSIGN: DATIVE
            MARK: DATIVE
            SELECT: (DATIVE ((V . -) (N . +)))
   projection?: NIL
   children:
     0: data: ASSIGN: DATIVE
              MARK: DATIVE
              SELECT: DATIVE
              TIME: 1
              MORPHEME: KURDU
              N: +
              V: -
        projection?: T
     1: data: TIME: 2
              MORPHEME: KU
              PERCOLATE: T
              CASE: DATIVE
        projection?: T

Figure 3: The parse of kurduku.

One can see that all three actions of the case-marker have executed. The selection caused the noun, kurdu, and the case-marker, ku, to co-project; furthermore, the noun was marked as selected (SELECT: DATIVE appears in its data). Marking and assignment also are evident. Note that all three actions percolated up the projection. This is due to the PERCOLATE: T datum for ku, which forces the actions to percolate instead of simply being deleted upon execution. The actions of case-markers percolate because they can be used in complex noun phrase formation, marking nouns that precede them at the syntactic level. This phenomenon has not yet been fully implemented. The TIME datum is used simply to record the order in which the morphemes appeared in the input so that the precedence information may be retained in the parse. One more note: the PROJECTION? field is true when the category's parent is a member of its projection, and false when it isn't.
Because the top-level category in the phrase-marker is a projection of both subordinate categories, the PROJECTION? entries for both of them are true.

In figure 4, the parse of puntarni is shown. There is much more information here than was present for each of the lexical entries for the verb, punta, and the tensed element, rni. The added information comes from the application of lexical rules, mentioned above. These rules first associate the θ-roles with their corresponding cases, as can be seen in the data entry for punta. Second, they set up the INTERNAL and EXTERNAL actions which project one and two levels, respectively, in syntax. That is, the agent, which will be marked with ergative case, will fill the subject position; the theme and the source, which will be marked with absolutive and dative cases, will fill the object positions.

0: actions: ASSIGN: ABSOLUTIVE
            INTERNAL: SOURCE
            INTERNAL: THEME
            EXTERNAL: AGENT
            ASSIGN: ERGATIVE
   projection?: NIL
   children:
     0: data: SELECT: +
              TIME: 1
              THEME: ABSOLUTIVE
              SOURCE: DATIVE
              AGENT: ERGATIVE
              MORPHEME: PUNTA
              THETA-ROLES: (AGENT THEME SOURCE)
              CONJUGATION: 2
              N: -
              V: +
        projection?: T
     1: data: TIME: 2
              MORPHEME: RNI
              TENSE: NONPAST
              TNS: +
        projection?: T

Figure 4: The parse of puntarni.

A PARSED SENTENCE

The phrase-marker for sentence (2) is given in figure 5. The corresponding parse for this sentence is shown in figures 6 and 7, the actual output of the parser. In the parse, the verb has projected two levels, as per its projection actions, INTERNAL and EXTERNAL. These two actions are particular to the syntactic parser, which is why they were not executed at the lexical level when they were introduced. INTERNAL causes the verb to project one level, and inserts the LINK action for the object cases. EXTERNAL causes a second level of projection, and inserts the LINK action for the subject case. Note that the TIME information is now stored at the level of lexical projections; these are the times when the lexical projections were presented to the syntactic parser.

To demonstrate the parser's ability to correctly parse free word order sentences, the other 23 permutations of sentence (2) were given to the parser. The phrase-markers constructed, omitted here for the sake of brevity, were equivalent to the phrase-marker above. That is, except for the ordering of the constituents, the domination relations were the same: the noun marked for ergative case was in all cases the subject, associated with the agent θ-role; and the nouns marked for absolutive and dative cases were in all cases the objects, associated with the theme and source θ-roles, respectively.

CONCLUSION

We have presented a currently implemented parser that can parse some free-word order sentences of Warlpiri. The representations (e.g., the lexicon and phrase-markers) and algorithms (e.g., projection, undirected case-marking, and the directed selection) employed are faithful to the linguistic theory on which they are based. This system, while quite unlike a rule-based parser, seems to have the potential to correctly analyze a substantial range of linguistic phenomena. Because the parser is based on linguistic principles it should be more flexible and extendible than rule-based systems. Furthermore, such a parser may be changed more easily when there are changes in the linguistic theory on which it is based. These properties give the class of principle-based parsers greater promise to ultimately parse full-fledged natural language input.
[Figure 5 (drawing garbled in source): The phrase-marker for sentence (2), spanning ngajulu-rlu, punta-rni, kurdu-ku, and karli.]

0: projection?: NIL
   children:
     0: actions: MARK: ERGATIVE
                 SELECT: (ERGATIVE ((V . -) (N . +)))
        data: LINK: ERGATIVE
              ASSIGN: ERGATIVE
              TIME: 1
        projection?: NIL
        children:
          0: data: MARK: ERGATIVE
                   SELECT: ERGATIVE
                   MORPHEME: NGAJULU
                   NUMBER: SINGULAR
                   PERSON: 1
                   N: +
                   V: -
             projection?: T
          1: data: MORPHEME: RLU
                   PERCOLATE: T
                   CASE: ERGATIVE
             projection?: T
     1: projection?: T
        children:
          0: data: TIME: 2
             projection?: T
             children:
               0: data: SELECT: +
                        THEME: ABSOLUTIVE
                        SOURCE: DATIVE
                        AGENT: ERGATIVE
                        MORPHEME: PUNTA
                        THETA-ROLES: (AGENT THEME SOURCE)
                        CONJUGATION: 2
                        N: -
                        V: +
                  projection?: T
               1: data: MORPHEME: RNI
                        TENSE: NONPAST
                        TNS: +
                  projection?: T

Figure 6: The first half of the parse of sentence (2).

          1: actions: ASSIGN: DATIVE
                      MARK: DATIVE
                      SELECT: (DATIVE ((V . -) (N . +)))
             data: LINK: DATIVE
                   TIME: 3
             projection?: NIL
             children:
               0: data: ASSIGN: DATIVE
                        MARK: DATIVE
                        SELECT: DATIVE
                        MORPHEME: KURDU
                        N: +
                        V: -
                  projection?: T
               1: data: MORPHEME: KU
                        PERCOLATE: T
                        CASE: DATIVE
                  projection?: T
          2: data: LINK: ABSOLUTIVE
                   ASSIGN: ABSOLUTIVE
                   TIME: 4
                   MARK: ABSOLUTIVE
                   SELECT: ABSOLUTIVE
                   MORPHEME: KARLI
                   N: +
                   V: -
             projection?: NIL

Figure 7: The second half of the parse of sentence (2).

ACKNOWLEDGMENTS

This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the Laboratory's artificial intelligence research has been provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-80-C-0505.

I wish to thank my thesis advisor, Robert Berwick, for his helpful advice and criticisms. I also wish to thank Mary Laughren for her instruction on Warlpiri, without which I would not have been able to create this parser.

REFERENCES

Barton, G. Edward (1985). "The Computational Complexity of Two-level Morphology," A.I. Memo 856, Cambridge, MA: Massachusetts Institute of Technology.

Chomsky, Noam (1981). Lectures on Government and Binding: The Pisa Lectures. Dordrecht, Holland: Foris Publications.

Chomsky, Noam (1982). Some Concepts and Consequences of the Theory of Government and Binding. Cambridge, MA: MIT Press.

Hale, Ken (1983). "Warlpiri and the Grammar of Non-configurational Languages," Natural Language and Linguistic Theory, pp. 5-47.

Johnson, Mark (1985). "Parsing with Discontinuous Constituents," 23rd Annual Proceedings of the Association for Computational Linguistics, pp. 127-132.

Laughren, Mary (1978). "Directional Terminology in Warlpiri, a Central Australian Language," Working Papers in Language and Linguistics, Volume 8, pp. 1-16.

Nash, David (1980). "Topics in Warlpiri Grammar," Ph.D. Thesis, M.I.T. Department of Linguistics and Philosophy.
THE RELATIONSHIP BETWEEN TREE ADJOINING GRAMMARS AND HEAD GRAMMARS†

D. J. Weir, K. Vijay-Shanker, A. K. Joshi
Department of Computer and Information Science
University of Pennsylvania
Philadelphia, PA 19104

† This work was partially supported by the NSF grants MCS-82-19116-CER, MCS-82-07294 and DCR-84-10413. We are grateful to Tony Kroch and Carl Pollard, both of whom have made valuable contributions to this work.

Abstract

We examine the relationship between the two grammatical formalisms: Tree Adjoining Grammars and Head Grammars. We briefly investigate the weak equivalence of the two formalisms. We then turn to a discussion comparing the linguistic expressiveness of the two formalisms.

1 Introduction

Recent work [9,3] has revealed a very close formal relationship between the grammatical formalisms of Tree Adjoining Grammars (TAG's) and Head Grammars (HG's). In this paper we examine whether they have the same power of linguistic description. TAG's were first introduced in 1975 by Joshi, Levy and Takahashi [1] and investigated further in [2,4,8]. HG's were first introduced by Pollard [5]. TAG's and HG's were introduced to capture certain structural properties of natural languages. These formalisms were developed independently and are notationally quite different. TAG's deal with a set of elementary trees composed by means of an operation called adjoining. HG's maintain the essential character of context-free string rewriting rules, except for the fact that besides concatenation of strings, string wrapping operations are permitted. Observations of similarities between properties of the two formalisms led us to study the formal relationship between these two formalisms, and the results of this investigation are presented in detail in [9,3]. We will briefly describe the formal relationship established in [9,3], showing TAG's to be equivalent to a variant of HG's. We argue that the relationship between HG's and this variant of HG's, called Modified Head Grammars (MHG's), is very close.

Having discussed the question of the weak equivalence of TAG's and HG's, we explore, in Sections 4 and 5, what might be loosely described as their strong equivalence. Section 4 discusses consequences of the substantial notational differences between the two formalisms. In Section 5, with the use of several examples of analyses (that can not be given by CFG's), we attempt to give cases in which they have the ability to make similar analyses as well as situations in which they differ in their descriptive power.

1.1 Definitions

In this section, we shall briefly define the three formalisms: TAG's, HG's, and MHG's.

1.1.1 Tree Adjoining Grammars

Tree Adjoining Grammars differ from string rewriting systems such as Context Free Grammars in that they generate trees. These trees are generated from a finite set of so-called elementary trees using the operation of tree adjunction. There are two types of elementary trees: initial and auxiliary. Linguistically, initial trees correspond to phrase structure trees for basic sentential forms, whereas auxiliary trees correspond to modifying structures.

The nodes in the frontier of elementary trees are labelled by terminal symbols except for one node in the frontier of each auxiliary tree, the foot node, which is labelled by the same nonterminal symbol as the root. Since initial trees are sentential, their root is always labelled by the nonterminal S.

We now describe the adjoining operation. Suppose we adjoin an auxiliary tree β into a sentential tree γ.
The label of the node at which the adjoining operation takes place must be the same as the label of the root (and foot) of β. The subtree under this node is excised from γ, the auxiliary tree β is inserted in its place, and the excised subtree replaces the foot of β. Thus the tree obtained after adjoining β is as shown below.

[Figure (garbled in source): the trees γ and β, and the result of adjoining β into γ at a node labelled X.]

The definition of adjunction allows for more complex constraints to be placed on adjoining. Associated with each node is a selective adjoining (SA) constraint specifying that subset of the auxiliary trees which can be adjoined at this node. If the SA constraint specifies an empty subset of trees, then we call this constraint the Null Adjoining (NA) constraint. If the SA constraint specifies the entire set of auxiliary trees whose root is labelled with the appropriate nonterminal, then by convention we will not specify the SA constraint. We also allow obligatory adjoining (OA) constraints at nodes, to ensure that an adjunction is obligatorily performed at these nodes. When we adjoin an auxiliary tree β in a tree γ, those nodes in the resulting tree that do not correspond to nodes of β retain those constraints appearing in γ. The remaining nodes have the same constraints as those for the corresponding nodes of β.

1.1.2 Head Grammars

Head Grammars are string rewriting systems like CFG's, but differ in that each string has a distinguished symbol corresponding to the head of the string. These are therefore called headed strings. The formalism allows not only concatenation of headed strings but also so-called head wrapping operations which split a string on one side of the head and place another string between the two substrings. We use one of two notations to denote headed strings: when we wish to explicitly mention the head we use the representation w1[h]w2, where the bracketed symbol is the head; alternatively, we simply denote a headed string by w. Productions in a HG are of the form A → f(α1, ..., αn) or A → α1 where: A is a nonterminal; αi is either a nonterminal or a headed string; and f is either a concatenation or a head wrapping operation. Roach [6] has shown that there is a normal form for Head Grammars which uses only the following operations.

LC1(u1[h1]u2, v1[h2]v2) = u1[h1]u2 v1 h2 v2
LC2(u1[h1]u2, v1[h2]v2) = u1 h1 u2 v1[h2]v2
LL1(u1[h1]u2, v1[h2]v2) = u1[h1] v1 h2 v2 u2
LL2(u1[h1]u2, v1[h2]v2) = u1 h1 v1[h2]v2 u2
LR1(u1[h1]u2, v1[h2]v2) = u1 v1 h2 v2 [h1]u2
LR2(u1[h1]u2, v1[h2]v2) = u1 v1[h2]v2 h1 u2

1.1.3 Modified Head Grammars

Pollard's definition of headed strings includes the headed empty string ē. However, a term f(w1, ..., wn) is undefined when one of its arguments is ē. This nonuniformity has led to difficulties in proving certain formal properties of HG's [6]. MHG's were considered to overcome these problems. Later in this paper we shall argue that MHG's are not only close to HG's formally, but also that they can be given a linguistic interpretation which retains the essential characteristics of HG's. It is worth noting that the definition of MHG's given here coincides with the definition of HG's given in [7].

Instead of headed strings, MHG's use so-called split strings. Unlike a headed string, which has a distinguished symbol, a split string has a distinguished position about which it may be split. In MHG's, there are 3 operations on split strings: W, C1, and C2. The operations C1 and C2 correspond to the operations LC1 and LC2 in HG's.
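As a concrete gloss on these definitions, the six normal-form operations can be written out over headed strings represented as (left, head, right) triples. This is only our own illustrative encoding, not part of the paper; the final line previews the analysis of easy problems to solve discussed in Example 1.

    # The six normal-form HG operations over headed strings,
    # encoded here as (left, head, right) triples of strings.

    def LC1(x, y):
        (u1, h1, u2), (v1, h2, v2) = x, y
        return (u1, h1, u2 + v1 + h2 + v2)        # concatenate, keep head 1

    def LC2(x, y):
        (u1, h1, u2), (v1, h2, v2) = x, y
        return (u1 + h1 + u2 + v1, h2, v2)        # concatenate, keep head 2

    def LL1(x, y):
        (u1, h1, u2), (v1, h2, v2) = x, y
        return (u1, h1, v1 + h2 + v2 + u2)        # wrap x around y, keep head 1

    def LL2(x, y):
        (u1, h1, u2), (v1, h2, v2) = x, y
        return (u1 + h1 + v1, h2, v2 + u2)        # wrap x around y, keep head 2

    def LR1(x, y):
        (u1, h1, u2), (v1, h2, v2) = x, y
        return (u1 + v1 + h2 + v2, h1, u2)        # wrap on the right, keep head 1

    def LR2(x, y):
        (u1, h1, u2), (v1, h2, v2) = x, y
        return (u1 + v1, h2, v2 + h1 + u2)        # wrap on the right, keep head 2

    # "easy to solve" wrapped around "problems" (cf. Example 1):
    print(LL2(("", "easy", " to solve"), ("", " problems", "")))
    # -> ('easy', ' problems', ' to solve')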
They are defined as follows (we write a split string as w1↑w2, with ↑ marking the split point):

C1(w1↑w2, u1↑u2) = w1↑w2 u1 u2
C2(w1↑w2, u1↑u2) = w1 w2 u1↑u2

Since the split point is not a symbol (which can be split either to its left or right) but a position between strings, separate left and right wrapping operations are not needed. The wrapping operation, W, in MHG is defined as follows:

W(w1↑w2, u1↑u2) = w1 u1↑u2 w2

We could have defined two operations W1 and W2 as in HG. But since W1 can very easily be simulated with other operations, we require only W2, renamed simply W.

2 MHG's and TAG's

In this section, we discuss the weak equivalence of TAG's and MHG's. We will first consider the relationship between the wrapping operation W of MHG's and the adjoining operation of TAG's.

2.1 Wrapping and Adjoining

The weak equivalence of MHG's and TAG's is a consequence of the similarities between the operations of wrapping and adjoining. It is the roles played by the split point and the foot node that underlie this relationship. When a tree is used for adjunction, its foot node determines where the excised subtree is reinserted. The strings in the frontier to the left and right of the foot node appear on the left and right of the frontier of the excised subtree. As shown in the figure below, the foot node can be thought of as a position in the frontier of a tree, determining how the string in the frontier is split.

[Figure (garbled in source): an auxiliary tree whose frontier w1 ... w2 is split at its foot node, adjoined at a node whose excised subtree has frontier v1↑v2.]

Adjoining in this case corresponds to wrapping w1↑w2 around the split string v1↑v2. Thus, the split point and the foot node perform the same role. The proofs showing the equivalence of TAG's and MHG's are based on this correspondence.

2.2 Inclusion of TAL in MHL

We shall now briefly present a scheme for transforming a given TAG to an equivalent MHG. We associate with each auxiliary tree a set of productions such that each tree generated from this elementary tree with frontier w1Xw2 has an associated derivation in the MHG, using these productions, of the split string w1↑w2. The use of this tree for adjunction at some node labelled X can be mimicked with a single additional production which uses the wrapping operation.

For each elementary tree we return a sequence of productions capturing the structure of the tree in the following way. We use nonterminals that are named by the nodes of elementary trees rather than the labels of the nodes. For each node η in an elementary tree, we have two nonterminals Xη and Yη: Xη derives the strings appearing on the frontier of trees derived from the subtree rooted at η; Yη derives the concatenation of the strings derived under each daughter of η. If η has daughters η1, ..., ηk then we have the production:

Yη → Ci(Xη1, ..., Xηk)

where the node ηi dominates the foot node (by convention, we let i = 1 if η does not dominate the foot node). Adjunction at η is simulated by use of the following production:

Xη → W(Xμ, Yη)

where μ is the root of some auxiliary tree which can be adjoined at η. If adjunction is optional at η then we include the production:

Xη → Yη

Notice that when η has an NA or OA constraint we omit the second or third of the above productions, respectively. Rather than present the full details (which can be found in [9,3]) we illustrate the construction with an example showing a single auxiliary tree and the corresponding MHG productions.

[Figure (garbled in source): a single auxiliary tree and its associated MHG productions of the three forms above, where μ1, ..., μn are the roots of the auxiliary trees adjoinable at the relevant node.]
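A corresponding sketch of the three MHG operations, with split strings encoded as (left, right) pairs whose split point lies between the two components; the final lines illustrate how W mirrors adjunction as described above. The encoding and names are our own illustration.

    # The three MHG operations over split strings, encoded as
    # (left, right) pairs with the split point between them.

    def C1(x, y):   # concatenate, keep the first split point
        (w1, w2), (u1, u2) = x, y
        return (w1, w2 + u1 + u2)

    def C2(x, y):   # concatenate, keep the second split point
        (w1, w2), (u1, u2) = x, y
        return (w1 + w2 + u1, u2)

    def W(x, y):    # wrap x around y, keep the second split point
        (w1, w2), (u1, u2) = x, y
        return (w1 + u1, u2 + w2)

    # W mirrors adjunction: the auxiliary tree's frontier w1 ... w2 splits
    # at the foot node and wraps around the excised subtree's frontier.
    aux = ("w1 ", " w2")    # frontier of an auxiliary tree around its foot
    sub = ("u1", " u2")     # frontier of the excised subtree
    print(W(aux, sub))       # -> ('w1 u1', ' u2 w2')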
2.3 Inclusion of MHL in TAL

In this construction we use elementary trees to directly simulate the use of productions in MHG to rewrite nonterminals. Generation of a derivation tree in string-rewriting systems involves the substitution of nonterminal nodes, appearing in the frontier of the unfinished derivation tree, by trees corresponding to productions for that nonterminal. From the point of view of the string languages obtained, tree adjunction can be used to simulate substitution, as illustrated in the following example.

[Figure (garbled in source): simulating substitution at a node labelled X by adjunction at a node whose sister is labelled by the empty string.]

Notice that although the node where adjoining occurs does not appear in the frontier of the tree, the presence of the node labelled by the empty string does not affect the string language.

For each production in the MHG we have an auxiliary tree. A production in an MHG can use one of the three operations: C1, C2, and W. Correspondingly we have three types of trees, shown below.

[Figure (garbled in source): the three auxiliary-tree schemata corresponding to C1, C2, and W productions.]

Drawing the analogy with string-rewriting systems: NA constraints at each root have the effect of ensuring that a nonterminal is rewritten only once; NA constraints at the foot node ensure that, like the nodes labelled by the empty string, they do not contribute to the strings derived; OA constraints are used to ensure that every nonterminal introduced is rewritten at least once.

The two trees mimicking the concatenation operations differ only in the position of their foot node. This node is positioned in order to satisfy the following requirement: for every derivation in the MHG there must be a derived tree in the TAG for the same string, in which the foot is positioned at the split point. The tree associated with the wrapping operation is quite different. The foot node appears below the two nodes to be expanded because the wrapping operation of MHG's corresponds to the LL2 operation of HG's, in which the head (split point) of the second argument becomes the new head (split point). Placement of the nonterminal which is to be wrapped above the other nonterminal achieves the desired effect, as described earlier.

While straightforward, this construction does not capture the linguistic motivation underlying TAG's. The auxiliary trees directly reflect the use of the concatenation and the wrapping operations. As we discuss in more detail in Section 4, elementary trees for natural language TAG's are constrained to capture meaningful linguistic structures. In the TAG's generated in the above construction, the elementary trees are incomplete in this respect, as reflected by the extensive use of the OA constraints. Since HG's and MHG's do not explicitly give minimal linguistic structures, it is not surprising that such a direct mapping from MHG's to TAG's does not recover this information.

3 HG's and MHG's

In this section, we will discuss the relationship between HG's and MHG's. First, we outline a construction showing that HL's are included in MHL's. Problems arise in showing the inclusion in the other direction because of the nonuniform way in which HG's treat the empty headed string. In the final part of this section, we argue that MHG's can be given a meaningful linguistic interpretation, and may be considered essentially the same as HG's.

3.1 HL's and MHL's

The inclusion of HL's in MHL's can be shown by constructing for every HG, G, an equivalent MHG, G'. We now present a short description of how this construction proceeds. Suppose a nonterminal X derives the headed string w1[h]w2.
Depending on whether the left or right wrapping operation is used, this headed string can be split on either side of the head. In fact, a headed string can be split first to the right of its head and then the resulting string can be split to the left of the same head. Since in MHG's we can only split a string in one place, we introduce nonterminals X^h that derive split strings of the form w1↑w2 whenever X derives w1[h]w2 in the HG. The missing head can be reintroduced with the following productions:

X^l → W(X^h, h↑)   and   X^r → W(X^h, ↑h)

Thus, the two nonterminals X^l and X^r derive w1h↑w2 and w1↑hw2, respectively. Complete details of this proof are given in [3].

We are unable to give a general proof showing the inclusion of MHL's in HL's. Although Pollard [5] allows the use of the empty headed string, mathematically, it does not have the same status as other headed strings. For example, LC1(ē, ē) is undefined. Although we have not found any way of getting around this in a systematic manner, we feel that the problem of the empty headed string in the HG formalism does not result from an important difference between the formalisms.

For any particular natural language, Head Grammars for that language appear to use either only the left wrapping operations LLi, or only the right wrapping operations LRi. Based on this observation, we suggest that for any HG for a natural language, there will be a corresponding MHG which can be given a linguistic interpretation. Since headed strings will always be split on the same side of the head, we can think of the split point in a split string as determining the head position. For example, split strings generated by a MHG for a natural language that uses only the left wrapping operations have their split points immediately to the right of the actual head. Thus a split point in a phrase not only defines where the phrase can be split, but also the head of the string.

4 Notational Differences between TAG's and HG's

TAG's and HG's are notationally very different, and this has a number of consequences that influence the way in which the formalisms can be used to express various aspects of language structure. The principal differences derive from the fact that TAG's are a tree-rewriting system, unlike HG's which manipulate strings.

The elementary trees in a TAG, in order to be linguistically meaningful, must conform to certain constraints that are not explicitly specified in the definition of the formalism. In particular, each elementary tree must constitute a minimal linguistic structure. Initial trees have essentially the structure of simple sentences; auxiliary trees correspond to minimal recursive constructions and generally constitute structures that act as modifiers of the category appearing at their root and foot nodes.

A hypothesis that underlies the linguistic intuitions of TAG's is that all dependencies are captured within elementary trees. This is based on the assumption that elementary trees are the appropriate domain upon which to define dependencies, rather than, for example, productions in a Context-free Grammar. Since in string-rewriting systems dependent lexical items can not always appear in the same production, the formalism does not prevent the possibility that it may be necessary to perform an unbounded amount of computation in order to check that two dependent lexical items agree in certain features.
However, since in TAG's dependencies are captured by bounded structures, we expect that the complexity of this computation does not depend on the derivation. Features such as agreement may be checked within the elementary trees (instantiated up to lexical items) without need to percolate information up the derivation tree in an unbounded way. Some checking is necessary between an elementary tree and an auxiliary tree adjoined to it at some node, but this checking is still local and bounded. Similarly, elementary trees, being minimal linguistic structures, should capture all of the subcategorization information, simplifying the processing required during parsing. Further work (especially empirical) is necessary to confirm the above hypothesis before we can conclude that elementary trees can in fact capture all the necessary information or whether we must draw upon more complex machinery. These issues will be discussed in detail in a later paper.

Another important feature of TAG's that differentiates them from HG's is that TAG's generate phrase-structure trees. As a result, the elementary trees must conform to certain constraints such as left-to-right ordering and linguistically meaningful dominance relations. Unlike other string-rewriting systems that use only the operation of concatenation, HG's do not associate a phrase-structure tree with a derivation: wrapping, unlike concatenation, does not preserve the word order of its arguments. In Section 5, we will present an example illustrating the importance of this difference between the two formalisms.

It is still possible to associate a phrase-structure with a derivation in HG's that indicates the constituents, and we use this structure when comparing the analyses made by the two systems. These trees are not really phrase-structure trees but rather trees with annotations which indicate how the constituents will be wrapped (or concatenated). It is thus a derivation structure, recording the history of the derivation. With an example we now illustrate how a constituent analysis is produced by a derivation in a HG.

[Figure (garbled in source): an annotated derivation structure for a HG, with nodes marked by the operations used to combine their daughters.]

5 Towards "Strong" equivalence

In Section 2 we considered the weak equivalence of the two formalisms. In this section, we will consider three examples in order to compare the linguistic analyses that can be given by the two formalisms. We begin with an example (Example 1) which illustrates that the construction given in Section 2 for converting a TAG into an MHG gives similar structures. We then consider an example (Example 2) which demonstrates that the construction does not always preserve the structure. However, there is an alternate way of viewing the relationship between wrapping and adjoining, which, for the same example, does preserve the structure.

Although the usual notion of strong equivalence (i.e., equivalence under identity of structural descriptions) can not be used in comparing TAG and HG (as we have already indicated in Section 4), we will describe informally what the notion of "strong" equivalence should be in this case. We then illustrate by means of an example (Example 3) how the two systems differ in this respect.

5.1 Example 1

Pollard [5] has suggested that HG can be used to provide an appropriate analysis for easy problems to solve. He does not provide a detailed analysis but it is roughly as follows.
[Figure (garbled in source): a HG derivation structure for "easy problems to solve", in which the AP "easy to solve" is wrapped (LL2) around the NP "problems".]

This analysis can not be provided by CFG's since in deriving easy problems to solve we can not obtain easy to solve and problems as intermediate phrases. The appropriate elementary tree for a TAG giving the same analysis would be:

[Figure (garbled in source): an auxiliary tree rooted and footed in NP, spanning "easy to solve".]

Note that the phrase easy to solve wraps around problems by splitting about the head and the foot node in both the grammars. Since the conversion of this TAG would result in the HG given above, this example shows that the construction captures the correct correspondence between the two formalisms.

5.2 Example 2

We now present an example demonstrating that the construction does not always preserve the details of the linguistic analysis. This example concerns cross-serial dependencies, for example, dependencies between NP's and V's in subordinate clauses in Dutch (cited frequently as an example of a non-context-free construction). For example, the Dutch equivalent of John saw Mary swim is John Mary saw swim. Although these dependencies can involve an arbitrary number of verbs, for our purposes it is sufficient to consider this simple case. The elementary trees used in a TAG, G_TAG, generating this sentence are given below.

[Figure (garbled in source): the elementary trees of G_TAG for the Dutch cross-serial example.]

The HG given in [5] (G_HG) assigns the following derivation structure (an annotated phrase-structure recording the history of the derivation) for this sentence.

[Figure (garbled in source): the derivation structure assigned by G_HG.]

If we use the construction in Section 2 on the elementary trees for the TAG shown above, we would generate an HG, G'_HG, that produces the following analysis of this sentence.
As the previous two examples illustrate, there are two ways of drawing a correspondence between wrapping and adjoining,both of which can be applicable. However, only one of them is general enough to cover all situations, and is the one used in Sections 2 and 3 in discussing the weak equivalence. 5.3 Example 3 The normal notion of strong equivalence can not be used to discuss the relationship between the two formalisms, since HG's do not generate the standard phrase structure trees (from the derivation structure). However, it is possible to relate the analyses given by the two systems. This can be done in terms of the intermediate constituent structures. So far, in Examples 1 and 2 considered above we showed that the same analyses can be given in both the formalisms. We now present an example suggesting that this is not al- ways the case. There are certain constraints placed on ele- mentary trees: that they use meaningful elementary trees corresponding to minimal linguistic structures (for exam- ple, the verb and all its complements, including the subject complement are in the same elementary tree); and that the final tree must be a phrase-structure tree. As a result, TAG's can not give certain analyses which the HG's can provide, as evidenced in the following example. The example we use concerns analyses of John per- suaded Bill to leau,. We will discuss two analyses both of which have been proposed in the literature and have been independently justified. First, we present an analysis that can be expressed in both formalisms. The TAG has the following two elementary trees. $ J iS\ AlP v P h/ I ~ V P J I /\ V NP 5 fro I I J Jol,,, f~'~ N I b'~ The derivation structure corresponding to this analysis that HG's can give is as follows. 5 LC2 A/P VP ~_ct N V AlP 5 However, Pollard[5] gives another analysis which has the following derivation structure. 73 LC~ NP VP z.l-I I J \ N VP Lcl t,/P 1 /\ I [oha g 5 fl I /\ I In this analysis the predicate persuade to leave is formed as an intermediate phrase. Wrapping is then used to derive the phrase persuade Bill to leave. To provide such an anal- ysis with TAG's, the phrase persuade to leave must appear in the same elementary tree. Bill must either appear in an another elementary tree or must be above the phrase persuade to leave if it appears in the same elementary tree (so that the phrase persuade to leave is formed first). It can not appear above the phrase persuade to leave since then the word order will not be correct. Alternatively, it can not appear in a separate elementary tree since no mat- ter which correspondence we make between wrapp!ng and adjoining, we can not get a TAG which has meaningful el- ementary trees providing the same analysis. Thus the only appropriate TAG for this example is as shown above. The significance of this constraint that TAG's appear to have (illustrated by Example 3) can not be assessed until a wider range of examples are evaluated from this point of view. 6 Conclusion This paper focusses on the linguistic aspects of the re- lationship between Head Grammars and Tree Adjoining Grammars. With the use of examples, we not only illus- trate cases where the two formalisms make similar analy- ses, but also discuss differences in their descriptive power. Further empirical study is required before we can deter- mine the significance of these differences. We have also briefly studied the consequences of the notational differ- ences between the formalisms. 
A more detailed analysis of the linguistic and computational aspects of these differ- ences is currently being pursued. References [1] Joshi, A. K., Levy, L. S., and Takahashi, M. Tree Adjunct Grammars. Journal of Computer and System Sciences 10(1), March, 1975. [2] Joshi, A. K. How Much Context-Sensitivity is Neces- sary for Characterizing Structural descriptions - Tree Adjoining Grammars. In D. Dowty, L. Karttunen and Zwicky, A. (editors), Natural Language Processing - Theoretical, Computational and Psychological Perspec- tive. Cambridge University Press, New York, 1985. originally presented in 1983. [3] Joshi, A. K., Vijay-Shanker, K., and Weir, D.J. Tree Adjoining Grammars and Head Grammars. Techni- cal Report MS-CIS-86-1, Department of Computer and Information Science, University of Pennsylvania, Philadelphia, January, 1986. [4] Kroch, A. and Joshi, A. K. Linguistic Relevance of Tree Adjoining Grammars. Technical Report MS-CIS-85- 18, Department of Computer and Information Science, University of Pennsylvania, Philadelphia, April, 1985. also to appear in Linguistics and Philosophy, 1986. [5] Pollard, C. Generalized Phrase Structure Grammars, Head Grammars and Natural Language. PhD thesis, Stanford University, August, 1984. [6] Roach, K. Formal Properties of Head Grammars. 1985. Presented at Mathematics of Language workshop at the University of Michigan, Ann Arbor. [7] Rounds, W. C. LFP: A Logic for Linguistic Descrip- tions and an Analysis of its Complexity. September, 1985. University of Michigan. [8] Vijay-Shanker, K. and Joshi, A. K. Some Compu- tational Properties of Tree Adjoining Grammars. In 23 rd meeting of Assoc. of Computational Linguistics, pages 82-93. July, 1985. [9] Vijay-Shanker, K., Weir, D. J., and Joshi, A. K. Tree Adjoining and Head Wrapping. In 11 th International Conference on Computational Linguistics. August, 1986. 74
CATEGORIAL AND NON-CATEGORIAL LANGUAGES

Joyce Friedman
Ramarathnam Venkatesan

Computer Science Department
Boston University
111 Cummington Street
Boston, Massachusetts 02215 USA

ABSTRACT

We study the formal and linguistic properties of a class of parenthesis-free categorial grammars derived from those of Ades and Steedman by varying the set of reduction rules. We characterize the reduction rules capable of generating context-sensitive languages as those having a partial combination rule and a combination rule in the reverse direction. We show that any categorial language is a permutation of some context-free language, thus inheriting properties dependent on symbol counting only. We compare some of their properties with other contemporary formalisms.

INTRODUCTION

Categorial grammars have recently been the topic of renewed interest, stemming in part from their use as the underlying formalism in Montague grammar. While the original categorial grammars were early shown to be equivalent to context-free grammars [1, 2, 3], modifications to the formalism have led to systems both more and less powerful than context-free grammars. Motivated by linguistic considerations, Ades and Steedman [4] introduced categorial grammars with some additional cancellation rules. Full cancellation rules correspond to application of functions to arguments. Their partial cancellation rules correspond to functional composition. The new backward combination rule is motivated by the need to treat preposed elements. They also modified the formalism by making category symbols parenthesis-free, treating them in general as governed by a convention of association to the left, but violating this convention in certain of the rules. This treatment of categorial grammar suggests a family of categorial systems, differing in the set of cancellation rules that are allowed. Earlier, we began a study of the mathematical properties of that family of systems [5], showing that some members are fully equivalent to context-free grammars, while others yield only a subset of the context-free languages, or a superset of them. In this paper we continue with these investigations. We characterize the rule systems that can obtain context-sensitive languages, and compare the sets of categorial languages with the context-free languages. Finally, we discuss the linguistic relevance of these results, and compare categorial grammars with TAG systems in this regard.

PRELIMINARIES

A categorial grammar under a set R of reduction rules is a quadruple CGR(VT, VA, S, F), whose elements are defined as follows. VT is a finite set of morphemes. VA is a finite set of atomic category symbols. S ∈ VA is a distinguished element of VA. To define F, we must first define CA, the set of category symbols. CA is given by: (i) if A ∈ VA, then A ∈ CA; (ii) if X ∈ CA and A ∈ VA, then X/A ∈ CA; and (iii) nothing else is in CA. F is the lexicon, a function from VT to 2^CA such that for every a ∈ VT, F(a) is finite. We often write CGR to denote a categorial grammar with rule set R, when the elements of the quadruple are known.

Notation: Morphemes are denoted by a, b; morpheme strings by u, v, w. The symbols S, A, B, C denote atomic category symbols, and U, V, X, Y denote arbitrary (complex) category symbols. Complex category symbols whose left-most symbol is S (symbols "headed" by S) are denoted by Xs, Ys. Strings of category symbols are denoted by x, y. The language of a categorial grammar is determined in part by the set R of reduction rules.
This set can include any subset of the following five rules. In each statement, A ∈ VA, and U/A, A/U, A/V, V/A ∈ CA.

(1) (F Rule) The string of category symbols U/A A can be replaced by U. We write: U/A A → U.
(2) (FP Rule) The string U/A A/V can be replaced by U/V. We write: U/A A/V → U/V.
(3) (B Rule) The string A U/A can be replaced by U. We write: A U/A → U.
(4) (Bs Rule) Same as the B rule, except that U is headed by S.
(5) (BP Rule) The string A/U V/A can be replaced by V/U. We write: A/U V/A → V/U.

If XY → Z by the F rule, XY is called an F-redex; similarly for the other four rules. Any one of them may simply be called a redex. The reduction relation determined by a subset of these rules is denoted by => and defined by: if XY → Z by one of the rules of R, then for any α, β in CA*, αXYβ => αZβ. The reflexive and transitive closure of the relation => is =>*. A morpheme string w = w1 w2 ··· wn is accepted by CGR(VT, VA, S, F) if there is a category string x = X1 X2 ··· Xn such that Xi ∈ F(wi) for each i = 1, 2, ···, n, and x =>* S. The language L(CGR) accepted by CGR(VT, VA, S, F) is the set of all morpheme strings that are accepted.
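To make the rule definitions concrete, here is a small brute-force recognizer for these reductions. The encoding -- a category such as S/A/C/B as the head-first tuple ('S', 'A', 'C', 'B') -- and all function names are our own, not the paper's:

  # Brute-force recognizer for parenthesis-free categorial reductions.
  def f_rule(x, y):    # F:  U/A  A  -> U
      return x[:-1] if len(x) > 1 and len(y) == 1 and x[-1] == y[0] else None

  def fp_rule(x, y):   # FP: U/A  A/V -> U/V
      return x[:-1] + y[1:] if len(x) > 1 and len(y) > 1 and x[-1] == y[0] else None

  def b_rule(x, y):    # B:  A  U/A -> U
      return y[:-1] if len(x) == 1 and len(y) > 1 and y[-1] == x[0] else None

  def bs_rule(x, y):   # Bs: like B, but the result must be headed by S
      u = b_rule(x, y)
      return u if u is not None and u[0] == 'S' else None

  def bp_rule(x, y):   # BP: A/U  V/A -> V/U
      return y[:-1] + x[1:] if len(x) > 1 and len(y) > 1 and y[-1] == x[0] else None

  def accepts(cats, rules):
      # True iff the category string reduces to S.  Every rule shortens the
      # string by one category, so the recursion terminates.
      if cats == (('S',),):
          return True
      for i in range(len(cats) - 1):
          for rule in rules:
              z = rule(cats[i], cats[i + 1])
              if z is not None and accepts(cats[:i] + (z,) + cats[i + 2:], rules):
                  return True
      return False

  # With R = {FP, Bs} and the lexicon FEQ introduced in the next section
  # (choosing C/D for c), the string adbec of L1 is accepted:
  print(accepts((('A',), ('D',), ('B',), ('S', 'A', 'C', 'B'), ('C', 'D')),
                [fp_rule, bs_rule]))   # True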
I. NON-CONTEXT-FREE CATEGORIAL LANGUAGES

In this section we present a characterization theorem for the categorial systems that generate only context-free languages. First, we introduce a lexicon FEQ that we will show has the property that, for any choice R of rules, any string in L(CGR) has equal numbers of a, b, and c. We define the lexicon FEQ as FEQ(a) = {A}, FEQ(b) = {B}, FEQ(c) = {C/A/C/B, C/D}, FEQ(d) = {D}, FEQ(e) = {S/A/C/B}. We will also make use of two languages on the alphabet {a, b, c, d, e}: L1 = {a^n d b^n e c^n | n ≥ 1}, and LEQ = {w | #a = #b = #c ≥ 1, #d = #e = 1}.

A lemma shows that with any set R of rules the lexicon FEQ yields a subset of LEQ.

Lemma 1. Let G be any categorial grammar CGR(VT, VA, S, FEQ), where VT = {a, b, c, d, e} and VA = {S, A, B, C, D}, with R ⊆ {F, FP, B, Bs, BP}. Then L(G) ⊆ LEQ.

Proof. Let x = X1 X2 ... Xn =>* S, and let w = w1 ... wn be a corresponding morpheme string. To differentiate between the occurrence of a symbol as a head and otherwise, write C/A/C/B = CA⁻¹C⁻¹B⁻¹, S/A/C/B = SA⁻¹C⁻¹B⁻¹, and C/D = CD⁻¹. For any rule system R, a redex is two adjacent categories, the tail of one matching the head of the other, and it is reduced to a single category after cancelling the matching symbols. Since all occurrences of A must cancel to yield a reduction to S, #A = #A⁻¹. This holds for all atomic categories except S, for which #S = #S⁻¹ + 1. This lexicon has the property that any derivable category symbol either has exactly one S and is S-headed, or has no occurrence of S. Hence in x, #S = 1, i.e., w has exactly one e. Let the numbers of occurrences in x of C/A/C/B and C/D be p and q respectively. It follows that #C = p + q and #C⁻¹ = p + 1. Hence q = 1 and w has exactly one d. Each occurrence of C/A/C/B introduces one A⁻¹ and one B⁻¹. Since w has one e, #A⁻¹ = #B⁻¹ = p + 1. Hence #A = #B = p + 1. Since for each A, B, and C in x there must be exactly one a, b, and c, #a = #b = #c. □

We show next that in the restricted case where R contains only the two rules FP and Bs, the language L1 is obtained.

Lemma 2. Let CGR be the categorial grammar with lexicon FEQ and rule set R = {FP, Bs}. Then L(CGR) = L1.

Proof. Any x ∈ L1 has a unique parse, of the form (Bs FP)^n Bs Bs^n, and hence L1 ⊆ L(CGR). Conversely, any x having a parse must have exactly one e. Further, all b's and c's can appear only on the left and right of e respectively. Any derivable category having an A has the form S/(A/)^n U, where U does not have any A. Thus all A's appear consecutively on the left of the e. For the rightmost c, F(c) = C/D; a d must be in between the a's and the b's. By Lemma 1, #a = #b = #c. Thus x = a^n d b^n e c^n for some n. Hence L1 = L(CGR). □

The next lemma shows that no language intermediate between L1 and LEQ can be context-free. It really does not involve categorial grammar at all.

Lemma 3. If L1 ⊆ L ⊆ LEQ, then L is not context-free.

Proof. Suppose L is context-free. Since L contains L1, it has arbitrarily long strings of the form a^n d b^n e c^n. Let k and K be the pumping-lemma constants, and choose n > max(K, k). This string, if pumped, yields a string not in LEQ; hence we have a contradiction. □

Corollary. Let {FP, Bs} ⊆ R. Then there is a non-context-free language L(CGR).

Proof. Use the lexicon FEQ. Then by Lemma 1, L(CGR) ⊆ LEQ. But {FP, Bs} ⊆ R, so L1 ⊆ L(CGR). The result follows by Lemma 3. □

The following theorem summarizes the results by characterizing the rule sets that can be used to generate context-sensitive languages.

Main Theorem. A categorial system with rule set R can generate a context-sensitive language if and only if R contains a partial combination rule and a combination rule in the reverse direction.

Proof. The "if" part follows for {FP, Bs} by Lemmas 1, 2, and 3. It follows for {BP, F} by symmetry. For the "only if" part, first note that any unidirectional system (a system whose rules are all forward, or all backward) can generate only context-free languages [5]. The only remaining cases are {F, B} and {FP, BP}. The first generates only context-free languages [5]. The second generates only the empty language, since no atomic symbol can be derived using only these two rules. □

II. CATEGORIAL LANGUAGES ARE PERMUTATIONS OF CONTEXT-FREE LANGUAGES

Let VT = {a1, a2, ..., ak}. A Parikh mapping [6] ψ is a mapping from morpheme strings to vectors such that ψ(w) = (#a1, #a2, ..., #ak). u is a permutation of v iff ψ(u) = ψ(v). Let ψ(L) = {ψ(w) | w ∈ L}. A language L' is a permutation of L iff ψ(L') = ψ(L).

We define a rotation as follows. In the parse tree for u ∈ L, at any node corresponding to a B-redex or a BP-redex, exchange its left and right subtrees, obtaining an F-redex or an FP-redex. Let v be the resulting terminal string. We say that u has been transformed into v by rotation.

We now obtain results that are helpful in showing that certain languages cannot be generated by categorial grammars. First we show that every categorial language is a permutation of a context-free language. This will enable us to show that properties of context-free languages that depend only on symbol counts must also hold of categorial languages.

Theorem. Let R ⊆ {F, FP, B, Bs, BP}. Then there exists an LCF such that ψ(L(CGR)) = ψ(LCF), where LCF is context-free.

Proof. Let x ∈ L(CGR). In its parse tree, at each node corresponding to a B-redex or a BP-redex, perform a rotation, so that it becomes an F-redex or an FP-redex. Since the transformed string y is obtained by rearranging the parse tree, ψ(x) = ψ(y). Also, y is derivable using R' = {F, FP} only. Hence the set of such y obtained as a permutation of some x is the same as L(CGR'), which is context-free [5]; i.e., L(CGR') = LCF. □

Corollary. For any R ⊆ {F, FP, B, Bs, BP}, L(CGR) is semilinear, Parikh bounded, and has the linear growth property. Semilinearity follows from Parikh's lemma, and linear growth from the pumping lemma for context-free languages. Parikh boundedness follows from the fact that any context-free language is Parikh bounded [6]. □
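The Parikh mapping itself is trivial to compute; a short sketch (the alphabet ordering fixes the coordinate order):

  from collections import Counter

  # Parikh vector of w over a fixed alphabet ordering.
  def parikh(w, alphabet):
      counts = Counter(w)
      return tuple(counts[a] for a in alphabet)

  # Two strings are permutations of one another iff their vectors agree:
  print(parikh("adbec", "abcde"))                               # (1, 1, 1, 1, 1)
  print(parikh("adbec", "abcde") == parikh("abcde", "abcde"))   # True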
Proposition. Any one-symbol categorial grammar is regular. Note that if L is a semilinear subset of the nonnegative integers, {a^n | n ∈ L} is regular.

III. NON-CATEGORIAL LANGUAGES

We now exhibit some non-categorial languages and compare categorial languages with others. From the corollary of the previous section we have the following results.

Theorem. Categorial languages are properly contained in the context-sensitive languages.

Proof. The languages {a^h(n) | n ≥ 0}, where h(n) = n^2 or h(n) = 2^n, do not have linear growth rate and so are not generated by any CGR; yet they are context-sensitive. Also, {a^m b^n | either m > n, or n ≥ m and m is prime} is not semilinear [7] and hence not categorial. □

It is interesting to note that lexical functional grammar can generate the first two languages mentioned above [8], and indexed languages can generate {a^n b^(n^2) a^n | n ≥ 1}.

Linguistic Properties

We now look at some languages that exhibit cross-serial dependencies. Let G3 be the CGR with R = {FP, Bs}, VT = {a, b, c, d}, and the lexicon F(a) = {S1/A/S1, A}, F(b) = {S1/B/S1, B}, F(c) = {S1}, F(d) = {S/S1}. Then L3 = L(G3) = {wcdw | w ∈ {a, b}*}. The reasoning is similar to that of Lemma 1. First, #c = #d = 1, from #S = 1. Since we have the Bs rule, c occurs on the left of d, and all occurrences of a and b on the left of c get assigned A and B respectively. Similarly, all a and b on the right of c get assigned the complex categories defined by F. It follows that all symbols to the right of d get combined by the FP rule and those on the left by the Bs rule. Hence a symbol occurring n symbols to the right of d must be matched by an occurrence n symbols to the right of the left-most symbol.

For any k, let G4(k) be the CGR with R = {FP, Bs} again, VT = {ai, bi | 1 ≤ i ≤ k} ∪ {ci | 1 ≤ i < k} ∪ {d, e}, and the lexicon F(bi) = {Si/Ai/Si} and F(ai) = {Ai} for 1 ≤ i ≤ k, F(ci) = {Si/Si+1} for 1 ≤ i < k, F(d) = {Sk}, and F(e) = {S/S1}. Then, writing L4(k) for L(G4(k)), L4(k) = {a1^n1 a2^n2 ··· ak^nk d e b1^n1 c1 b2^n2 c2 ··· ck-1 bk^nk | ni ≥ 1} for any k. Note that #Ai = #Ai⁻¹, which implies #bi = #ai. The rest of the argument parallels that for L3 above. Thus {FP, Bs} has the power to express unbounded cross-serial dependencies.
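Reusing accepts, fp_rule, and bs_rule from the sketch in the Preliminaries, the copy language L3 can be checked mechanically (the lexicon encoding below is ours; 'S1' stands for the atomic category S1):

  from itertools import product

  # Lexicon of G3: F(a) = {A, S1/A/S1}, F(b) = {B, S1/B/S1},
  #                F(c) = {S1},         F(d) = {S/S1}.
  G3_LEX = {'a': [('A',), ('S1', 'A', 'S1')],
            'b': [('B',), ('S1', 'B', 'S1')],
            'c': [('S1',)],
            'd': [('S', 'S1')]}

  def in_L3(word):
      # Try every lexical category assignment, then test reducibility to S.
      return any(accepts(cats, [fp_rule, bs_rule])
                 for cats in product(*(G3_LEX[ch] for ch in word)))

  print(in_L3("abcdab"))   # True:  w c d w with w = ab
  print(in_L3("abcdba"))   # False: the two copies differ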
Now we can compare with Tree Adjoining Grammars (TAG) [8]. A TAG without local constraints cannot generate L3. A TAG with local constraints can generate it, but cannot generate L6 = {a^m b^n c^m d^n | m, n ≥ 1}. L4(2) can be transformed into L6 by the homomorphism erasing c1, d, and e. TAG languages are closed under homomorphisms, and thus the categorial language L4(2) is not a TAG language. TAG languages exhibit only limited cross-serial dependencies. Thus, though TAG languages and CG languages share some properties, such as linear growth, semilinearity, generation of all context-free languages, limited context-sensitive power, and Parikh boundedness, they are different in their generative capacities.

Acknowledgements

We would like to thank Weiguo Wang and Dawei Dai for helpful discussions.

References

1. Yehoshua Bar-Hillel, "On syntactical categories," Journal of Symbolic Logic, vol. 15, pp. 1-16, 1950. Reprinted in Bar-Hillel (1964), pp. 19-37.
2. Haim Gaifman, Information and Control, vol. 8, pp. 304-337, 1965.
3. Yehoshua Bar-Hillel, Language and Information, Addison-Wesley, Reading, Mass., 1964.
4. Anthony E. Ades and Mark J. Steedman, "On the order of words," Linguistics and Philosophy, vol. 4, pp. 517-558, 1982.
5. Joyce Friedman, Dawei Dai, and Weiguo Wang, "Weak Generative Capacity of Parenthesis-free Categorial Grammars," Technical Report #86-1, Dept. of Computer Science, Boston University, 1986.
6. Meera Blattner and Michel Latteux, "Parikh-Bounded Languages," in Automata, Languages and Programming, LNCS 115, ed. Shimon Even and Oded Kariv, Springer-Verlag, 1981.
7. Harry R. Lewis and Christos H. Papadimitriou, Elements of the Theory of Computation, Prentice-Hall, 1981.
8. Aravind K. Joshi, "Factoring recursion and dependencies: an aspect of tree adjoining grammars and a comparison of some formal properties of TAGs, GPSGs, PLGs and LFGs," 21st Ann. Meeting of the Assn. for Comp. Linguistics, 1983.
PARSING CONJUNCTIONS DETERMINISTICALLY

Donald W. Kosy
The Robotics Institute
Carnegie-Mellon University
Pittsburgh, Pennsylvania 15213

ABSTRACT

Conjunctions have always been a source of problems for natural language parsers. This paper shows how these problems may be circumvented using a rule-based, wait-and-see parsing strategy. A parser is presented which analyzes conjunction structures deterministically, and the specific rules it uses are described and illustrated. This parser appears to be faster for conjunctions than other parsers in the literature, and some comparative timings are given.

INTRODUCTION

In recent years, there has been an upsurge of interest in techniques for parsing sentences containing coordinate conjunctions (and, or, and but) [1, 2, 3, 4, 5, 8, 9]. These techniques are intended to deal with three computational problems inherent in conjunction parsing:

1. Since virtually any pair of constituents of the same syntactic type may be conjoined, a grammar that explicitly enumerates all the possibilities seems needlessly cluttered with a large number of conjunction rules.

2. If a parser uses a top-down analysis strategy (as is common with ATN and logic grammars), it must hypothesize a structure for the second conjunct without knowledge of its actual structure. Since this structure could be any that parallels some constituent that ends at the conjunction, the parser must generate and test all such possibilities in order to find the ones that match. In practice, the combinatorial explosion of possibilities makes this slow.

3. It is possible for a conjunct to have "gaps" (ellipsed elements) which are not allowed in an unconjoined constituent of the same type. These gaps must be filled with elements from the other conjunct for a proper interpretation, as in: I gave Mary a nickel and Harry a dime.

The paper by Lesmo and Torasso [9] briefly reviews which techniques apply to which problems before presenting their own approach. Two papers in the list above [1, 3] present deterministic, "wait-and-see" methods for conjunction parsing. In both, however, the discussion centers around the theory and feasibility of parsers that obey the Marcus determinism hypothesis [10] and operate with a limited-length lookahead buffer. This paper examines the other side of the coin, namely, the practical power of the wait-and-see approach compared to strictly top-down or bottom-up methods. A parser is described that analyzes conjunction structures deterministically and produces parse trees similar to those produced by Dahl and McCord's MSG system [4]. It is much faster than either MSG or Fong and Berwick's RPM device [5], and comparative timings are given. We conclude with some descriptive comparisons to other systems and a discussion of the reasons behind the performance observed.

OVERVIEW OF THE PARSER

For the sake of a name, we will call the parser NEXUS, since it is the syntactic component of a larger system called NEXUS. This system is being developed to study the problem of learning technical concepts from expository text. The acronym stands for Non-Expert Understanding System. NEXUS is a direct descendant of READER, a parser written by Ginsparg at Stanford in the late 1970's [6]. Like all wait-and-see parsers, it incorporates a stack to hold constituent structures being built, some variables that record the state of the parse, and a set of transition rules that control the parsing process.
The stack structures and state variables in NEXUS are almost the same as in READER, but the rules have been rewritten to make them cleaner, more transparent, and more complete. There are two categories of rules. Segmentation rules are responsible for finding the boundaries of constituents and creating stack structures to store these results. Recombination rules are responsible for attaching one structure to another in syntactically valid ways. Segmentation operations are separate from, and always precede, recombination operations. All the rules are encoded in Lisp; there is no separate rule interpreter.

Segmentation rules take as input a word from the input sentence and a partial-parse of the sentence up to that word. The rules are organized into procedures such that each procedure implements those rules that apply to one syntactic word class. When a rule's conditions are met, it adds the input word to the partial-parse, in a way specified in the rule, and returns the new partial-parse as output. A partial-parse has three parts:

1. The stack: A stack (not a tree) of the data structures which encode constituents. There are two types of structures in the stack, one type representing clause nuclei (the verb group, noun phrase arguments, and adverbs of a clause), and the other representing prepositional phrases. Each structure consists of a collection of slots to be filled with constituents as the parse proceeds.

2. The message (MSG): A symbol specifying the last action performed on the stack. In general, this symbol will indicate the type of slot the last input word was inserted in.

3. The stack-message (MSG1): A list of properties of the stack as a whole (e.g., the sentence is imperative).
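A minimal encoding of this record might look as follows; the class and field names are ours, not Kosy's, and the slot inventory is abbreviated:

  from dataclasses import dataclass, field

  # Hypothetical rendering of NEXUS's clause structure and partial-parse.
  @dataclass
  class Clause:
      function: str = "MAIN"                      # hypothesized clause role
      verb: list = field(default_factory=list)    # VERB slot (verb group)
      nps: list = field(default_factory=list)     # NP1..NP3, in order filled
      adv: list = field(default_factory=list)     # ADV slot
      notes: list = field(default_factory=list)   # NOTE slot

  @dataclass
  class PartialParse:
      stack: list = field(default_factory=list)   # Clause / preposition structures
      msg: str = "BEGIN"                          # MSG: last action on the stack
      msg1: list = field(default_factory=list)    # MSG1: stack-wide properties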
For a given input word W, a rule can: • continue filling a slot in the top stack structure by inserting W • begin filling a new slot in the top structure • push a new structure onto the stack and begin filling one of its slots • collapse the stack so that a structure below the top becomes the new top • modify a slot in the top structure based on the infor- mation provided by W In addition, a rule will generally change the MSG variable, and may insert or delete items in the list of stack messages. The way the rules work is best shown by example. Suppose the input is: The children wore the socks on their hands. The segmentation NEXUS performs appears in Fig. 2a. On the left are the words of the sentence and their possible syntactic classes. The contribution each word makes to the development of the parse is shown to the right of the production symbol "= ~>". We will draw the stack upside down so that successive parsing states are reached as one reads down the page. The contents of a stack structure are indicated by the accumulation of slot values between the dashed-line delimiters ("--.-."). Empty slots are not shown. Input Word Word Class MSG1 MSG Stack -- - nil BEGIN FUNCTION: MAIN the A => nil NOUN NPI: the children N = > nil NOUN NPI': the children wore V = > nil VERB VERB: wore the A = > nil NOUN NP2: the socks N,V => nil NOUN NP2': thesocks on P = ;> nil PREP PREP: on their N = > nil NOUN NP: their hands N,V => nil NOUN NP': theirhands a. Segmentation {wear PN [SUB the children] the socks] their hands] } b. Recombination Figure 2: Parse of The children wore the socks on their hands Before parsing begins, the three parts of a partial-parse are initialized as shown on the first line. One structure is prestored in the stack (it will come to hold the main clause of the input sentence), the message is BEGIN, and MSG1 is empty. The pars- ing itself is performed by applying the word class rules for each input word to the partial-parse left after processing the previous word. For example, before the word wore is processed, MSG = NOUN, MSG1 is empty, and the stack contains one clause with FUNCTION = MAIN and NP1 = the children. Wore is a verb and so the Verb rules are tried. The third rule is found to apply since there is a clause in the stack meeting the conditions. This clause is the top one so there is no collapse. (Collapse performs recombination and is described below.) The word wore is in. serted in the VERB slot, MSG is set, and the rule returns the new partial.parse. It is possible for the segmentation process to yield more than one new partial-parse for a given input word. This can occur in two ways. First, a word may belong to several syntactic classes "79 and when this is so, NEXUS tries the rules for each class. If rules in more than one class succeed, more than one new partial-parse is produced. As it happens, the two words in the example that are both nouns and verbs do not produce more than one partial- parse because the Verb rules don't apply when they are processed. Second, a word in a given class can often be added to a partial.parse in more than one way. The third and fifth Verb rules, for example, may both be applicable and hence can produce two new partial.parses. In order to keep track of the possibilities, all active partial.parses are kept in a list and NEXUS adds new words to each in parallel. 
The main segmentation control loop therefore has the following form:

  For each word w in the input sentence do
    For each word class C that w belongs to do
      For each partial-parse P in the list do
        Try the C rules given w and P
      Loop
    Loop
    Store all new partial-parses in the list
  Loop
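A runnable rendering of this loop, using the PartialParse sketch above (word_classes and rules_for are assumed lookup functions, not part of NEXUS):

  # Hypothetical driver: rules_for(cls) returns a function mapping a word and
  # a partial-parse to the (possibly empty) list of new partial-parses.
  def segment(sentence, word_classes, rules_for):
      parses = [PartialParse(stack=[Clause()])]    # initial state: one MAIN clause
      for w in sentence:
          new_parses = []
          for cls in word_classes(w):              # every class w belongs to
              for p in parses:
                  new_parses.extend(rules_for(cls)(w, p))
          parses = new_parses                      # all analyses carried in parallel
      return parses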
In contrast to segmentation rules, which add structures to a partial-parse stack, recombination rules reduce a stack by joining structures together. These rules specify the types of attachment that are possible, such as the attachment of a post-modifier to a noun phrase or the attachment of an adjunct to a clause. The successful execution of a rule produces a new structure, with the attachment made, and a rating of the semantic acceptability of the attachment. The ratings are used to choose among different attachments if more than one is syntactically possible. There are three rating values -- perfect, acceptable, and unacceptable -- and these are encoded as numbers so that there can be degrees of acceptability. When one structure is attached to another, its rating is added to the rating of the attachment, and the sum becomes the rating of the new (recombined) structure. A structure's rating thus reflects the ratings of all its component constituents. Although NEXUS is designed to call upon an interpreter module to supply the ratings, currently they must be supplied by interaction with a human interpreter. Eventually, we expect to use the procedures developed by Hirst [7]. There is also a 'no-interpreter' switch which can be set to give perfect ratings to clause attachment of right-neighbor prepositional phrases, and noun phrase ("low") attachment of all other post-modifiers.

The order in which attachments are attempted is controlled by the collapse procedure. Collapse is responsible for assembling an actual parse tree from the structures in a stack. After initializing the root of the tree to be the bottom stack structure, the remaining structures are considered in reverse stack order, so that the constituents will be added to the tree in the order they appeared (left to right). For each structure, an attempt is made to attach it to some structure on the right frontier of the tree, starting at the lowest point and proceeding to the highest. (Looking only at the right frontier enforces the no-crossing condition of English grammar.¹) If a perfect attachment is found, no further possibilities are considered. Otherwise, the highest-rated attachment is selected, and collapse goes on to attach the next structure. If no attachment is found, the input is ungrammatical with respect to the specifications in the recombination rules.

¹The no-crossing condition says that one constituent cannot be attached to a non-neighboring constituent without attaching the neighbor first. For instance, if constituents are ordered A, B, and C, then C cannot be attached to A unless B is attached to A first. Furthermore, this implies that if B and C are both attached to A, B is closed to further attachments.

After a stack has been collapsed, a formatting procedure is called to produce the final output. This procedure is primarily responsible for labeling the grammatical roles played by NPs and for computing the tense of VERBs. It is also responsible for inserting dummy nouns in NP slots to mark the position of "wh-gaps" in questions and relative clauses.

Figure 2b shows the tree NEXUS would derive for the example. The code PN indicates past tense, and the role names should be self-explanatory. During collapse, the interpreter would be asked to rate the acceptability of each noun phrase by itself, the acceptability of the clause with the noun phrases in it, and the acceptability of the attachment. The former ratings are necessary to detect mis-segmented constituents, e.g., to downgrade "time flies" as a plausible subject for the sentence Time flies like an arrow. By Hirst's procedure, the last rating should be perfect for the attachment of the on-phrase to the clause as an adjunct since, without a discourse context, there is no referent for the socks on their hands and the verb wear expects a case marked by on.

CONJUNCTION PARSING

To process and and or, we need to add a coordinate conjunction word class (C) and three segmentation rules for it.²

1. If MSG = BEGIN, push a clause with FUNCTION = w onto the stack. Set MSG = CONJ and return.

2. If the topmost nonconjunct clause in the stack has VERB filled, push a clause with FUNCTION = w onto the stack. Set MSG = CONJ and return.

3. Otherwise, push a preposition structure with PREP = w onto the stack. Set MSG = PREP and return.

²The conjunction but is not syntactically interchangeable with and and or, since but cannot freely conjoin noun phrases: *John but Mary wore socks. The rules for but have not yet been developed.

The first rule is for sentence-initial conjunctions, the second for potential clausal conjuncts, and the third is for cases where the conjunction cannot join clauses. This last case arises when noun phrases are conjoined in the subject of a sentence: John and Mary wore socks. Note that the stack structure for a noun phrase conjunct is identical to that for a prepositional phrase.

To handle gaps, we also need to add one rule each to the Noun and Verb procedures. For Verb, the rule is:

4. If MSG = CONJ, set NP1 = !sub and VERB = w in the top structure. Set MSG = VERB and return.

For Noun:

5. If the top structure S is a clause conjunct with NP1 filled but no VERB, and there is another clause C in the stack with VERB filled and more than one NP filled, copy the VERB filler from C to S's VERB slot. If C has NP3 filled, transfer S's NP1 to NP2 and set S's NP1 = !sub. Insert w as a new NP in S. Set MSG = NOUN and return.

In both rules, !sub is a dummy placeholder for the subject of the clause. Rule 4 is for verbs that appear directly after a conjunction, and rule 5 is for transitive or ditransitive conjuncts with a gapped verb.

To specify attachments for conjuncts, we need some recombination rules. In general, elements to be conjoined must have very similar syntactic structure. They must be of the same type (noun phrase, clause, prepositional phrase, etc.). If clauses, they must serve the same function (top-level assertion, infinitive, relative clause, etc.), and if non-finite clauses, any ellipsed elements (wh-gaps) must be the same. If these conditions are met, an attachment is proposed. Additionally, in three situations, a recombination rule may also modify the right conjunct:

1. A clause conjunct without a verb can be proposed as a noun phrase conjunct.

2. A clause conjunct without a verb may also be proposed as a gapped verb, as in: Bob saw Sue in Paris and [Bob saw] Linda in London.

3. When constituents from the left conjunct are ellipsed, they may have to be taken from the right conjunct, as in the famous sentence: John drove through and completely demolished a plate glass window. This transformation is actually implemented in the final formatting procedure, since all of the trailing cases in the right conjunct must be moved over to the left conjunct if any such movement is warranted.

Since all these situations are structurally ambiguous, the interpreter is always called to rate the modifications. In situation 2, for instance, it may be that there is no gap: Bob saw Sue in [Paris and London] in the spring of last year. In situation 3, the gapped element might come from context, rather than the right conjunct: Ignoring the stop sign at the intersection, John drove through and completely demolished his reputation as a safe driver. Hence, only interpretation can determine which choice is most appropriate.
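For concreteness, conjunction rule 4 might be rendered on the PartialParse sketch above like this (the names and the copying discipline are ours):

  from copy import deepcopy

  SUB_GAP = "!sub"   # dummy placeholder for a shared subject

  # Hypothetical rendering of Verb rule 4: a verb directly after a
  # conjunction starts the conjunct clause with a gapped subject.
  def verb_rule_4(w, p):
      if p.msg != "CONJ":
          return []
      p = deepcopy(p)             # keep other live analyses unaffected
      top = p.stack[-1]
      top.nps.append(SUB_GAP)     # NP1 = !sub
      top.verb.append(w)          # VERB = w
      p.msg = "VERB"
      return [p]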
This transformation is actually implemented in the final formatting procedure since all of the trailing cases in the right conjunct must be moved over to the left con- junct if any such movement is warranted. Since all these situations are structurally ambiguous, the inter- preter is always called to rate the modifications. In situation 2, for instance, it may be that there is no gap: Bob saw Sue in [Paris and London] in the spring of last year. In situation 3, the gapped element might come from context, rather than the right conjunct: Ignoring the stop sign at the intersection, John drove through and completely demolished his reputation as a safe driver. Hence, only interpretation can determine which choice is most ap- propriate. Let us now examine how these rules operate by tracing through a few examples. First, suppose the sentence from the previous section were to continue with the words "and their feet". Rule 2 would respond to the conjunction, and the rest of the segmentation would be: Input Word Word Class MSG1 MSG Stack and C = > nil CONJ FUNCTION: AND their N = > nil NOUN NP1 : their feet N = > nil NOUN NP1 ': their feet Thus, the noun rules would do what they normally do in filling the first NP slot in a clause structure. If the sentence ended here, recombination would conjoin the last two noun phrases, "their hands" end "their feet", as the complement of on, producing: {wear PN f SUB the children] OBJ the socks] ON their hands (AND their feet)] } If, instead, the sentence did not end but continued with a verb -- "froze", say .- the segmentation would continue by adding this word to the VERB slot in the top structure, which is open. As before, the rules would do what they normally do to fill a slot. Recombination would yield conjoined clauses: {wear PN rUB the children] OBJ the socks] _ ON their hands] AND (V freeze PN [SUB their feet]) } Notice that the second clause is inserted as just another case adjunct of the first clause. There is really no need to construct a coordinate structure (wherein both clauses would be dominated by the conjunction) since it adds nothing to the interpretation. Moreover, as Dahl & McCord point out [4], it is actually better to preserve the subordination structure because it provides essen- tial information for scoping decisions. Now we move on to gaps. Consider a new right conjunct for our original example sentence in which the subject is ellipsed: The children wore the socks on their hands ~nd froze their feet. Rule 4 would detect the gap and the resulting segmentation would be: Input Word Word Class MSG1 MSG Stack and C = > nil CONJ FUNCTION: AND froze V = > nil VERB NPI: /sub VERB: froze their N = > nil NOUN NP2: their feet N = ) nil NOUN NP2': their feet Recombination would yield conjoined clauses with shared sub- ject: {wear PN ISUB the children] OBJ the socks] ON their hands] AND (V freeze PN SUB/sub] _ OBJ their feet]) } The appearance of/sub in the second SUB slot tells the inter- preter that the subject of the right conjunct is ¢creferential with the subject of the left conjunct. Finally, to illustrate rule 5, consider the sentence: The children wore the socks on their hands and John a lampshade on his head. When the parser comes to "a", rule 5 applies, the verb wore is copied over to the second conjunct, and "a" is inserted into NP2. 
Thus, the segmentation of the conjunct clause looks like this: Input Word Word Class MSG1 MSG Stack and C = > nil CONJ FUNCTION: AND John N = ;> nil NOUN NPI: John a A = > nil VERB: wore NOUN NP2: s lampshade N = > nil NOUN NP2': a lampshade on P => nil PREP PREP: on his N = > nil NOUN NP: his head N,V => nil NOUN NP': hishead Recombination would produce the conjunction of two complete clauses with no shared material. 8] RESULTS Using the rules described above, NEXUS can successfully parse all the conjunction examples given in all the papers, with two exceptions. It cannot parse: • conjoined adverbs, e.g., Slowly and stealthily, he crept toward his victim. • embedded clausal complement gaps, e.g., Max wants to try to begin to write a novel and Alex a play. The problem with these forms lies not so much in the conjunction rules as in the rules for adverbs and clausal complements in general. These latter rules simply aren't very well developed yet. It is instructive to compare the NEXUS parser to that of Lesmo & Toraseo. Like theirs, NEXUS solves the first problem men- tioned in the introduction by using transition rules rather than a more conventional declarative grammar. Also like theirs, NEXUS solves the third problem by means of special rules which detect gaps in conjuncts and which fill those gaps by copying con- stituents from the other conjunct. Unlike theirs, however, NEXUS delays recombination decisions as long as it can and so does not have to search for possible attachments in some situations where theirs does. For instance, in processing Henry repeated the story John told Mary and Bob told Ann his opinion. their parser would first mis.attach [and Bob] to [Mary], then mis- attach [and Bob told Ann] to [John told Mary]. Each time, a search would be made to find a new attachment when the next word of the input was read. NEXUS can parse this sentence successfully without any mis-attachments at all. It is also instructive to compare NEXUS to the work of Church. His thesis [3] gives a detailed specification of a some fairly elegant rules for conjunction (and several other constructions) along with their linguistic and psycholinguistic justification. While most of the rules are not actually exhibited, their specification suggests that they are similar in many ways to those in NEXUS. However, Church was primarily concerned with the implications of determinism and limited memory, and so his parser, YAP, does not defer decisions as long as NEXUS does. Hence, YAP could not find, or ask for resolution of, the ambiguity in a sentence like: I know Bob and Bill left. YAP parses this as [I know Bob] and [Bill left]. NEXUS would find both parses because the third and fifth verb rules both apply when the verb left is processed. Note that these two parses are required not because of the conjunction, but because of the verb know, which can take either a noun phrase or a clause as its object. Only one parse would be needed for unambiguous variations such as I know that Bob and Bill left and I know Bob and Bill knows me. In general, the conjunction rules do not introduce any additional nondeterminism into the grammar beyond that which was there already. With respect to efficiency, the table below gives the execution times in milliseconds for NEXUS's parsing of the sample sen- tences tabulated in [5]. For comparison, the times from [5] for MSG and RPM are also shown. 
All three systems were executed on a Dec.20 and the times shown for each are just the time taken to build parse trees: time spent on morphological analysis and post-parse transformations is not included. MSG and RPM are written in Prolog and NEXUS is written in Maclisp (compiled). NEXUS was run with the 'no-interpreter' switch turned on. Sample Sentences MSG RPM NEXUS Each man ate an apple and a pear. 662 292 112 John ate an apple and a pear. 613 233 95 A man and a woman saw each train. 319 506 150 Each man and each woman ate an apple. 320 503 129 John saw and the woman heard a man that laughed. 788 834 275 John drove the car through and completely demolished a window. 275 1032 166 The woman who gave a book to John and drove a car through a window laughed. 1007 3375 283 John saw the man that Mary saw and Bill gave a book to laughed. 439 311 205 John saw the man that heard the woman that laughed and saw Bill. 636 323 289 The man that Mary saw and heard gave an apple to each woman. 501 982 237 John saw a and Mary saw the red pear. 726 770 190 In all cases, NEXUS is faster, and in the majority, it is more that twice as fast as either other system. Averaging over all the sentences, NEXUS is about 4 times faster than RPM and 3 times faster than MSG. CONCLUSIONS The most innovative feature in NEXUS is its use of only two kinds of stack structures, one for clauses and one for everything else. When a structure is at the top of the stack, it represents a top.down prediction of constituents yet to come, and words from the input simply drop into the slots that are open to that class of word. When a word is encountered that cannot be inserted into the top structure nor into any structure lower in the stack, a new structure is built bottom-up, the new word inserted in it, and the parse goes on. When a word can both be inserted somewhere in the stack and also in a new structure, all possible parses are pursued in parallel. Thus, NEXUS seems to be a unique member of the wait-and-see family since it is not always deterministic and hence need not disembiguate until all information it could get from the sentence is available. The general efficiency of the parser is due primarily to its separation of segmentation from recombination. This is a divide and conquer strategy which reduces a large search space -- grammatical patterns for words in sentences -- into two smaller ones: (1) the set of grammatical patterns for simple phrases and clause nuclei, and (2) the set of allowable combinations of stack structures. Of course, search is still required to resolve structural ambiguity, but the total number of combinations is much less. It is not clear whether the parser's speed in the particular cases above comes from divide and conquer or from the dif- ferences between Prolog and Maclisp. Nevertheless, as systems are built that require larger, more comprehensive grammars, and that must deal with longer, more complicated sentences, the ef- ficiency of wait-and-see methods like those presented here should become increasingly important. 82 REFERENCES [1] Berwick, R.C. (1983), "A Deterministic Parser With Broad Coverage," Proceedings of/JCA/8, Karlsruhe, W. Germany, pp. 710-712. [2] Boguraev, B.K. (1983), "Recognising Conjunctions Within the ATN Framework," in K. Sparck-Jones and Y. Wilks (eds.), Automatic Natural Language Parsing, Ellis Horwood. [3] Church, K.W. (1980), "On Memory Limitations in Natural Language Processing," LCS TR.245, Laboratory for Com- puter Science, MIT, Cambridge, MA. Dahl, V., and McCord, M.C. 
(1983), "Treating Coordination in Logic Grammars," American Journal of Computational Linguistics, V. 9, No. 2, pp. 69-91. [5] Fong, S, and Berwick, R.C. (1985), "New Approaches to Parsing Conjunctions Using Prolog," Proceedings of the 23rd ACL Conference, Chicago, pp. 118-126. [6] Ginsparg, J. (1978), Natural Language Processing in an Automatic Programming Framework, AIM-316, PhD. Thesis, Computer Science Dept., Stanford University, Stanford, CA. [7] Hirst, G. (in press), Semantic Interpretation and the Resolu- tion of Ambiguity, New York: Cambridge University Press. [8] Huang, X. (1984), "Dealing with Conjunctions in a Machine Translation Environment," Proceedings of COLING 84, Stan- ford, pp. 243-246. [9] Lesmo, L., and Torasso, P. (1985), "Analysis of Conjunctions in a Rule.Based Parser", Proceedings of the 23rd ACL Conference, Chicago, pp. 180-187. [10] Marcus, M. (1980), A Theory of Syntactic Recognition for Natural Language, Cambridge, MA.: The MIT Press. 83 APPENDIX: SAMPLE SEGMENTATION RULES WORD CLASS A: Article Go begin new np with current word w. M: Modifier If MSG = NOUN and LEGALNP(lastNP + w), Continue lestNP with w and return. Else, Go begin new np with w. N: Noun If MSG = NOUN & w = that and lastNP can take a relative clause, Push a clause with FUNCTION = THAT, NP1 = that onto stack. Set MSG = THAT and return. If MSG = NOUN or THAT & LEGALNP(laetNP + w), Continue lastNP with w. If MSG = THAT, set MSG = NOUN and return. If w is the only noun in lastNP, return. If the top clause in the stack haS no empty NP, retum. Beoin new no: if MSG = THAT, Replace NPt with w. Set MSG = NOUN and return. If there a clause C in the stack with NP empty & C is below a relative clause with VERB filled, Collapse stack down to C end insert w as now NP. Set MSG = NOUN. If the top structure in the stack has NP empty, Insert w as new NP. Set MSG = NOUN and return. If MSG = NOUN & lastNP can take a relative clause starting with w, Push a clause with FUNCTION = RC, NP1 = w onto stack. Set MSG = NOUN and return. If the topmost clause C in the stack has VERB filled, & C's VERB can take a clausal complement, Push a clause with FUNCTION = WHAT, NP1 = w onto stack. Set MSG = NOUN and return. WORD CLASS P: Preposition it w = to & next word is infinitive verb, Push a clause with FUNCTION = INF, NP1 =/sub onto stack. Set MSG = INF and return. Else, Push a preposition structure with PREP = w onto stack. Set MSG = PREP and return. V: Verb If MSG = BEGIN & w not inflected, Set NP1 = YOU', VERB = w, NOTE = IMP. Set MSG = VERB, insert IMP in MSG1, and retum. If MSG = VERB & LEGALVP(VERB + w), Continue VERB with w and return. If there is a clause C in the stack with NP1 filled & VERB empty & AGREES(w,NP1), if C not top structure in stack, collapse stack down to C. Set C's VERB = w and set MSG = VERB. If C is a subclause, return. If the top clause C in the stack has NP3 filled, If C not top structure in stack, collapse stack down to C. Push a clause with FUNCTION = THAT, VERB = w onto stack. Transfer C's NP3 to NP1 of new clause. Set MSG = VERB and return. if the topmost clause C with VERB filled can take a clause as NP2, If C not top structure in stack, collapse stack down to C. Push a clause with FUNCTION = WHAT, VERB = w onto stack. If C's NP2 is filled, transfer C's NP2 to NP1 of now clause. Set MSG = VERB and return. DEFINITIONS 1. The current input word is w. 2. The variable lastNP refers to the contents of the last NP ~Jot filled in the top structure, 3. 
3. The predicate LEGALVP tests whether its argument is a syntactically well-formed (partial) verb phrase (auxiliaries + verb).
4. The predicate LEGALNP tests whether its argument is a syntactically well-formed noun phrase (article + modifiers + nouns).
5. The predicate AGREES tests whether an NP and a verb agree in number.
6. A structure S "has NP empty" if S is either:
   • a preposition structure with NP empty;
   • a clause with no NP filled;
   • a clause with NP1 filled & VERB filled & either the verb is transitive or it is ditransitive, passive form;
   • a clause with NP1 filled & NP2 filled and the verb is ditransitive, not passive form.
7. A relative clause is a clause with FUNCTION = RC or THAT.
8. A subclause is a relative clause or a clause with FUNCTION = INF or WHAT.

NOTES

1. Of course, this is just a subset of the rules NEXUS actually uses. Not shown, for example, are rules for questions, adverbs, participles, and many other important constructions.
2. Even in the full parser, there are no rules for determining the internal structure of noun phrases. That task is handled by the interpreter.
3. The noun rules will always insert a new NP constituent into an empty NP slot if such a slot is available. Hence, they will always fill NP3 in a clause with a ditransitive verb, and NP2 in a clause which can take a clausal complement, even if these noun phrases turn out to be the initial NPs of relative or complement clauses. Such misattachments are detected by the fourth and fifth verb rules, which respond by generating the proper structures.
4. A clause with FUNCTION = THAT represents either a complement or a relative clause. The choice is made when the stack is collapsed.
5. The word that as sole NP constituent is either the demonstrative pronoun or a placeholder for a subsequent WHAT complement. The choice is made when the stack is collapsed.
COPYING IN NATURAL LANGUAGES, CONTEXT-FREENESS, AND QUEUE GRAMMARS

Alexis Manaster-Ramer
University of Michigan
2236 Fuller Road #108
Ann Arbor, MI 48105

ABSTRACT

The documentation of (unbounded-length) copying and cross-serial constructions in a few languages in the recent literature is usually taken to mean that natural languages are slightly context-sensitive. However, this ignores those copying constructions which, while productive, cannot be easily shown to apply to infinite sublanguages. To allow such finite copying constructions to be taken into account in formal modeling, it is necessary to recognize that natural languages cannot be realistically represented by formal languages of the usual sort. Rather, they must be modeled as families of formal languages or as formal languages with indefinite vocabularies. Once this is done, we see copying as a truly pervasive and fundamental process in human language. Furthermore, the absence of mirror-image constructions in human languages means that it is not enough to extend Context-free Grammars in the direction of context-sensitivity. Instead, a class of grammars must be found which handles (context-sensitive) copying but not (context-free) mirror images. This suggests that human linguistic processes use queues rather than stacks, making imperative the development of a hierarchy of Queue Grammars as a counterweight to the Chomsky Grammars. A simple class of Context-free Queue Grammars is introduced and discussed.

Introduction

The claim that at least some human languages cannot be described by a Context-free Grammar, no matter how large or complex, has had an interesting career. In the late 1960's it might have seemed, given the arguments of Bar-Hillel and Shamir (1960) about respectively coordinations in English, Postal (1964) about reduplication-cum-incorporation of object noun stems in Mohawk, and Chomsky (1963) about English comparative deletion, that this claim was firmly established. Potentially serious -- and at any rate embarrassing -- problems with both the formal and the linguistic aspects of these arguments kept popping up, however (Daly, 1974; Levelt, 1974), and the partial fixes provided by Brandt Corstius (as reported in Levelt, 1974) for the respectively arguments and by Langendoen (1977) for that as well as the Mohawk argument did not deter Pullum and Gazdar (1982) from claiming that "it seems reasonable to assume that the natural languages are a proper subset of the infinite-cardinality CFL's, until such time as they are validly shown not to be". Two new arguments, Higginbotham's (1984) one involving such that relativization and Postal and Langendoen's (1984) one about sluicing, were dismissed on grounds of descriptive inadequacy by Pullum (1984a), who, however, suggested that the Langendoen and Postal (1984) argument about the doubling relativization construction may be correct (all these arguments deal with English). Pullum (1984b) likewise heaped scorn on my argument that English reshmuplicative constructions show non-CFness, but he accepted (1984a; 1984b) Culy's (1985) argument about noun reduplication in Bambara and Shieber's (1985) one about Swiss German cross-serial constructions of causative and perception verbs and their objects. Gazdar and Pullum (1985) also cite these two, as well as an argument by Carlson (1983) about verb phrase reduplication in Engenni. They also refer to my discovery of the X or no X ... construction in English
and mention that "Alexis Manaster-Ramer ... in unpublished lectures finds reduplication constructions that appear to have no length bound in Polish, Turkish, and a number of other languages". While they do not refer to my 1983 reshmuplication argument, which they presumably still reject, the Turkish construction they allude to was cited in my 1983 paper and is similar to the English reshmuplication in form as well as function (see below). In any case, the acceptance of even one case of non-CFness in one natural language by the only active advocates of the CF position would seem to suffice to remove the issue from the agenda. Any additional arguments, such as Kac (to appear), Kac, Manaster-Ramer, and Rounds (to appear), and Manaster-Ramer (to appear a; to appear b), may appear to be no more than flogging of dead horses.

However, as I argued in Manaster-Ramer (1983) and as recent work (Manaster-Ramer, to appear a; Rounds, Manaster-Ramer, and Friedman, to appear) shows ever more clearly, this conception of the issue (viz., Is there one natural language that is weakly noncontext-free?) makes very little difference and not much sense. First of all, if non-CFness is so hard to find, then it is presumably linguistically marginal. Second, weak generative arguments cannot be made to work for natural languages, because of their high degree of structural ambiguity and the great difficulty of excluding every conceivable interpretation on which an apparently ungrammatical string might turn out -- on reflection -- to be in the language. Third, weak generative capacity is in any case not a very interesting property of a formal grammar, especially from a linguistic point of view, since linguistic models are judged by other criteria (e.g., natural languages might well be regular without this making CFGs any the more attractive as models for them). Fourth, results about the place of natural languages in the Chomsky Hierarchy should be considered in light of the fact that there is no reason to take the Chomsky Hierarchy as the appropriate formal space in which to look for them. Fifth, models of natural languages that are actually in use in theoretical, computational, and descriptive linguistics are -- and always have been -- only remotely related to the Chomsky Grammars, which means that results about the latter may be of little relevance to linguistic models.
This would explain the tortured history of the attempts to show that they exist at all. However, this appears to be wrong, at least when we consider copying constructions. The requirement of full or near identity of two or more subparts of a sentence (or a discourse) is a very widespread phenomenon. In this paper, I will focus on the copying constructions precisely because they are so common in human languages. In addition to such questions, which appear to focus on the linguistic side of things, there are also the more mathematical and conceptual problems involved in the whole enterprise of modeling human languages in formal terms. My own belief is that both kinds of issues must be solved in tandem, since we cannot know what kind of formal models we want until we know what we are going to model, and we cannot know what human languages are or are not like until we know hot, to represent them and what to compare them to. This paper is intended as a contribution to this kind of work. Copying Dependencies The examples of copying (and other) constructions which have figured in the great context-freeness debate have all involved attempts to show that a whole (natural) language is noncontext free. Now, while it is often easy to find a noncontext-free subset of such a language, it is not always possible to isolate that subset formally from the rest of the language in such a way as to show that the language as a whole is noncontext-free. There is so much ambiguity in natural languages that it is strictly speaking impossible to isolate any construction at the level of strings, thus invalidating all arguments against CFGs or even Regular Grammars that refer to weak generative capacity. However, the arguments can be reconstructed by making use of the notion of classificatory capacity of formal grammars, introduced in Manaster-Ramer (to appear a) and Manaster- Ramer and Rounds (to appear). The classificatory capacity is the set of languages generated by the various subgrammars of a grammar, and if we are willing to assume that linguists can tell which sentences in a language exemplify the same or different syntactic patterns, then we can usually simply demonstrate that, e.g., no CFG can have a subgrammar generating all and only the sentences of some particular construction if that construction involves reduplication. This will shot' the inadequacy of CFGs, even if the string set as a whole may be strictly speaking regular. Note that this approach holds that it is impossible to determine with any confidence that a particular string qua string is ungrammatical, but that it may be possible to tell one construction from another, and that the latter--and not the former--is the real basis of all linguistic work, theoretical, computational, and descriptive. Finite Copying The counterexamples to context-freeness in the literature have all been claimed to crucially involve expressions of unbounded length. This seemed necessary in view of the fact that an upper bound on length would imply finiteness of the subset of strings involved, which would as a result be of no formal language theoretic interest. However, it is often difficult to make a case for unbounded length, and the main result has been that, even though every linguist knows about reduplication, it seemed nearly impossible to find an instance of reduplication that could be used to make a formal argument against CFGs, even though no one would ever use a CFG to describe reduplication. 
For, in addition to reduplications that can apply to unboundedly long expressions, there is a much better known class of reduplications exemplified by Indonesian pluralization of nouns. Here it is difficult to show that the reduplicated forms are infinite in number, because compound nouns are not pluralized in the same way, and ignoring compounding, it would seem that the number of nouns is finite. However, this number is very large and moreover it is probably not well defined. The class of noun stems is open, and can be enriched by borrowing from foreign languages and neologisms, and all of these spontaneously pluralize by reduplication.

Rounds, Manaster-Ramer, and Friedman (to appear) argue that facts like this mean that a natural language should not be modeled as a formal language but rather as a family of languages, each of which may be taken as an approximation to an ideal language. In the case before us, we could argue that each of the approximations has only a finite number of nouns, for example, but a different number in different approximations. This idea, related to the work of Yuri Gurevich on finite dynamic models of computation, allows us to state the argument that the existence of an open class of reduplications is sufficient to show the inadequacy of CFGs for that family of approximations. The basis of the argument is the observation that while each of the approximate languages could in principle have a CFG, each such CFG would differ from the next not only in the addition of a new lexical item but also in the addition of a new reduplication rule (for that particular item). To capture what is really going on, we require a grammar that is the same for each approximation modulo the lexicon. This grammar in a sense generates the infinite ideal, but each actual approximate grammar only has a finite lexicon and hence only generates a finite number of reduplications. In order to model the flexibility of the natural language vocabulary, we assume that each member of the family has the same grammar modulo the terminal vocabulary and the rules which insert terminals.

Another way of stating this is that the lexicon of Indonesian is finite but of an indefinite size (what Gurevich calls "uncountably finite"). A CFG would still have to contain a separate rule for the plural of every noun and hence would have to be of an indefinite size. Thus, with the addition of a new noun, the grammar would have to add a new rule. However, this would mean that the grammar at any given time can only form the plurals of nouns that have already been learned. Since speakers of the language know in advance how to pluralize unfamiliar nouns, this cannot be true. Rather, the grammar at any given time must be able to form plurals of nouns that have not yet been learned. This in turn means that an indefinite number of plurals can be formed by a grammar of a determinate finite size. Hence, in effect, the number of rules for plural formation must be smaller than the number of plural forms that can be generated, and this in turn means that there is no CFG of Indonesian.
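The arithmetic of this argument can be made concrete in a few lines of code. The sketch below is purely illustrative and not from the paper: the noun stems are placeholder examples and the rule notation is invented, but it shows how a CFG's rule inventory must grow with the open lexicon while a single reduplication operation, stated once, does not.

    # A minimal sketch (mine, not the paper's) of the rule-growth argument:
    # a CFG needs a separate plural rule per known noun stem, so its size
    # is tied to the lexicon; one reduplication rule, stated as an
    # operation, covers stems that have not yet been learned.

    def cfg_plural_rules(lexicon):
        """One context-free rule per stem: NPpl -> stem stem."""
        return ["NPpl -> %s %s" % (stem, stem) for stem in lexicon]

    def reduplicate(stem):
        """A single rule that pluralizes any stem, familiar or not."""
        return stem + " " + stem

    lexicon = ["buku", "rumah", "orang"]     # placeholder Indonesian stems
    print(len(cfg_plural_rules(lexicon)))    # 3 rules for 3 nouns
    lexicon.append("komputer")               # a borrowing enters the language
    print(len(cfg_plural_rules(lexicon)))    # ... and the CFG needs a 4th rule
    print(reduplicate("komputer"))           # the single operation just applies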
This brings up a crucial issue, of which we are all presumably aware but which is usually lost sight of in practice, namely, that the way a mathematical model (in this case, formal language theory) is applied to a physical or mental domain (in this case, natural language) is a matter of utility and not itself subject to proof or disproof. Formal language theory deals with sets of strings over well-defined finite vocabularies (also often called alphabets) such as the hackneyed {a, b}. It has been all too easy to fall into the trap of equating the formal language theoretic notion of vocabulary (alphabet) with the linguistic notion of vocabulary and likewise to confuse the formal language theoretic notion of a string (word) over the vocabulary (alphabet) with the linguistic notion of sentence. However, the fundamental fact about all known natural languages is the openness of at least some classes of words (e.g., nouns but perhaps not prepositions or, in some languages, verbs), which can acquire new members through borrowing or through various processes of new formation, many of them apparently not rule-governed, and which can also lose members, as words are forgotten. Thus, the well-defined finite vocabularies of formal language theory are not a very good model of the vocabularies of natural languages. Whether we decide to introduce the notion of families of languages or that of uncountably finite sets or whether we rather choose to say that the vocabulary of a natural language is really infinite (being the set of all strings over the sounds or letters of the language that could conceivably be or become lexical items in it), we end up having to conclude that any language which productively reduplicates some open word class to form some grammatical category cannot have a CFG.

Copying in English

It should now be noted that reduplications (and reiterations generally) are extremely common in natural languages. Just how common follows from an inspection of the bewildering variety of such constructions that are found in English. All the examples cited here are productive though they may be of bounded length.

Linguistics shminguistics.
Linguistics or no linguistics, (I am going home).
A dog is a dog is a dog.
Philosophize while the philosophizing is good!
Moral is as moral does.
Is she beautiful or is she beautiful?

These are clause-level constructions, but we also find ones restricted to the phrase level.

(He) deliberates, deliberates, deliberates (all day long).
(He worked slowly) theorem by theorem.
(They form) a church within a church.
(He debunks) theory after theory.

Also relevant are cases where a copying dependency extends across sentence boundaries, as in discourses like:

A: She is fat.
B: She is fat, my foot.

It is interesting that several of these types are productive even though they appear to be based on what originally must have been more restricted, idiomatic expressions. The pattern a X within a X, for example, is surely derived from the single example a state within a state, yet has become quite productive. Many of these patterns have analogues in other languages. For example, the X after X construction appears to involve quantification and this may be related to the fact that, for example, Bambara uses reduplication to mean 'whatever' and Sanskrit to mean 'every' (Pāṇini 8.1.4). English reshmuplication has close analogues in many languages, including the whole Dravidian and Turkic language families. Tamil kiduplication (e.g., pustakam kistakam) and Turkish meduplication (e.g., kitap mitap) are instances of this, though the semantic range is somewhat different. In both of these, the sense is more like that of English books and things, books and such, i.e., a combination of deprecation and etceteraness rather than the purely derisive function of English books shmooks. The English X or no X ...
pattern is very similar to a Polish construction consisting of the form X (nominative) X (instrumental) ... in its range of applications. The repetition of a verb or verbal phrase to deprecate excessive repetition or intensity of an action seems to be found in many languages as well. I have not tried here to survey the uses to which copying constructions are put in different languages or even to document fully their wide incidence, though the examples cited should give some indication of both. It does appear that copying constructions are extremely common and pervasive, and this in turn suggests that they are central to man's linguistic faculties. When we consider such additional facts as the frequency of copying in child language, we may be tempted to take copying as one of the basic linguistic operations.

Copies vs. mirror images

The existence and the centrality of copying constructions poses interesting questions that go beyond the inadequacy of CFGs. For example, why should natural languages have reduplications when they lack mirror-image constructions, which are context-free? This asymmetry (first noted in Manaster-Ramer and Kac, 1985, and Rounds, Manaster-Ramer, and Friedman op. cit.) argues that it is not enough to make a small concession to context-sensitivity, as the saying goes. Rather than grudgingly clambering up the Chomsky Hierarchy towards Context-sensitive Grammars, we should consider going back down to Regular Grammars and striking out in a different direction. The simplest alternative proposal is a class of grammars which intuitively have the same relation to queues that CFGs have to stacks. The idea, which I owe to Michael Kac, would be that human linguistic processes make little if any use of stacks and employ queues instead.

Queue Grammars

This suggests that CFGs are not just inadequate as models of natural languages but inadequate in a particularly damaging way. They are not even the right point of departure, since they not only undergenerate but also overgenerate. This leads to the idea of a hierarchy of grammars whose relation to queues is like that of the Chomsky Grammars to stacks. A queue-based analogue to CFG is being developed, under the name of Context-free Queue Grammar. The current version allows rules of the following form:

A --> a
A --> aB
A --> aB...b
A --> a...b
A --> ...B

Whatever appears to the right of the three dots is put at the end of the string being rewritten. Otherwise, all definitions are as in a corresponding restricted CFG. Thus, the grammar

S --> aS...a
S --> bS...b
S --> a...a
S --> b...b

will generate the copying language over {a, b} excluding the null string and define derivations like the following:

S --> aSa --> abSab --> abaaba
S --> bSb --> baSba --> baaSbaa --> baabSbaab

On the other hand, I conjecture that the corresponding x mi(x) (mirror-image) language cannot be generated by such a grammar. Even at this early stage of inquiry into these formalisms, then, we have some tangible promise of being able to explain why natural languages should have reduplications but not mirror-image constructions. Various xh(x) constructions such as the respectively ones and the cross-serial verb constructions can be handled in the same way as reduplications.
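Because the rewrite-plus-append convention is the whole mechanism, a CFQG derivation is easy to simulate. The following sketch is my own illustration and not part of the proposal; the rule representation is an assumption, and the program simply replays the copy-language derivation shown above.

    # A sketch (mine) of a CFQG derivation: the material left of the three
    # dots rewrites the nonterminal in place, and the material right of
    # the dots is queued at the end of the string being rewritten.

    def derive(rules, steps, start="S"):
        """Apply a sequence of rules (indices into rules) to the start symbol."""
        form = start
        history = [form]
        for i in steps:
            lhs, body = rules[i]
            head, _, tail = body.partition("...")
            # Rewrite the nonterminal in place; append the queued material.
            form = form.replace(lhs, head, 1) + tail
            history.append(form)
        return " --> ".join(history)

    # The copy-language grammar from the text (null string excluded):
    rules = [("S", "aS...a"), ("S", "bS...b"), ("S", "a...a"), ("S", "b...b")]

    print(derive(rules, [0, 1, 2]))
    # S --> aSa --> abSab --> abaaba   (abaaba = aba + aba, a perfect copy)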
While the idea of taking queues as opposed to stacks as the principal nonfinite-state resource available to human linguistic processes would explain the prevalence of copying and the absence of mirror images, it does not explain the coexistence of center-embedded constructions with cross-serial ones or the relative scarcity of cross-serial constructions other than copying ones. For this reason, if for no other, the CFQGs could not be an adequate model of natural language. In fact, there are further problems with these grammars. One way in which they fail is that they apparently can only generate two copies--or two cross-serially dependent substrings--whereas natural languages seem to allow more (as in Grammar is grammar is grammar). This is similar to the limitation of Head Grammars and Tree Adjoining Grammars to generating no more than four copies (Manaster-Ramer, to appear a). However, a more general class of Queue Grammars appears to be within reach which will generate an arbitrary number of copies.

Perhaps more serious is the fact that CFQGs apparently can only generate copying constructions at the cost of profligacy (as defined in Rounds, Manaster-Ramer, and Friedman, to appear). The repair of this defect is less obvious, but it appears that the fundamental idea of basing models of natural languages on queues rather than stacks is not undermined. Rather, what is at issue is the way in which information is entered into and retrieved from the queue. The CFQGs suggest a piecemeal process but the considerations cited here seem to argue for a global one. A number of formalisms with these properties are being explored. On the other hand, it may be that something much like the simple CFQG is a natural way of capturing cross-serial dependencies in cases other than copying. To see exactly what is involved, consider the difference between copying and other cross-serial dependencies. This difference has little to do with the form of the strings. Rather, in the case of other cross-serial dependencies, there is a syntactic and semantic relation between the nth elements of two or more structures. For example, in a respectively construction involving a conjoined subject and a conjoined predicate, each conjunct of the former is semantically combined with the corresponding conjunct of the latter. In the case of copying constructions, there is nothing analogous. The corresponding parts of the two copies do not bear any relations to each other. Thus it makes some sense to build up the corresponding parts of a cross-serial construction in a piecemeal fashion, but this appears to be inapplicable in the case of copying constructions.

In view of all these limitations, the CFQGs might seem to be a non-starter. However, their importance lies in the fact that they are the first step in reorienting our notions of the formal space for models of natural language. Any real success in the theoretical models of human language depends on the development of appropriate mathematical concepts and on closing the gap between formal language and natural language theory. One of the first steps in this direction must involve breaking the spell of CFGs and the Chomsky Hierarchy. The CFQGs seem to be cut out for this task. Moreover, the idea that queues rather than stacks are involved in human language appears to be correct, and this more general result is independent of the limitations of CFQGs.
However, given my stated goals for formal models, it is necessary to develop models such as CFQGs before proceeding to more complex ones, precisely in order to develop an appropriate notion of the formal space within which we will have to work. The other main point addressed in this paper, the need to model human languages as families of formal languages or as formal languages with indefinite terminal vocabularies, is intended in the same spirit. The allure of identifying formal language theoretic concepts with linguistic ones in the simplest possible way is hard to overcome, but it must be if we are to get any meaningful results about natural languages through the formal route. It will, again, be necessary to do more work on these concepts, but it is beginning to look as though we have found the right direction.

REFERENCES

Carlson, Greg N. 1983. Marking Constituents. Linguistic Categories (Frank Heny and Barry Richards, eds.), 1: Categories, 69-98. Dordrecht: Reidel.

Chomsky, Noam. 1963. Formal Properties of Grammars. Handbook of Mathematical Psychology (R. Duncan Luce et al., eds.), 2: 323-418. New York: Wiley.

Culy, Christopher. 1985. The Complexity of the Vocabulary of Bambara. Linguistics and Philosophy, 8: 345-351.

Daly, R. T. 1974. Applications of the Mathematical Theory of Linguistics. The Hague: Mouton.

Gazdar, Gerald, and Geoffrey K. Pullum. 1985. Computationally Relevant Properties of Natural Languages and Their Grammars. New Generation Computing, 3: 273-306.

Higginbotham, James. 1984. English is not a Context-free Language. Linguistic Inquiry, 15: 225-234.

Kac, Michael B. To appear. Surface Transitivity and Context-freeness.

Kac, Michael B., Alexis Manaster-Ramer, and William C. Rounds. To appear. Simultaneous-distributive Coordination and Context-freeness. Computational Linguistics.

Langendoen, D. Terence. 1977. On the Inadequacy of Type-3 and Type-2 Grammars for Human Languages. Studies in Descriptive and Historical Linguistics: Festschrift for Winfred P. Lehmann (Paul Hopper, ed.), 159-171. Amsterdam: Benjamins.

Langendoen, D. Terence, and Paul M. Postal. 1984. Comments on Pullum's Criticisms. CL, 10: 187-188.

Levelt, W. J. M. 1974. Formal Grammars in Linguistics and Psycholinguistics. The Hague: Mouton.

Manaster-Ramer, Alexis. 1983. The Soft Formal Underbelly of Theoretical Syntax. CLS, 19: 256-262.

Manaster-Ramer, Alexis. To appear a. Dutch as a Formal Language. Linguistics and Philosophy.

Manaster-Ramer, Alexis. To appear b. Subject-verb Agreement in Respective Coordinations in English.

Manaster-Ramer, Alexis, and Michael B. Kac. 1985. Formal Languages and Linguistic Universals. Paper read at the Milwaukee Symposium on Typology and Universals.

Postal, Paul M. 1964. Limitations of Phrase Structure Grammars. The Structure of Language: Readings in the Philosophy of Language (Jerry A. Fodor and Jerrold J. Katz, eds.), 137-151. Englewood Cliffs, NJ: Prentice-Hall.

Postal, Paul M., and D. Terence Langendoen. 1984. English and the Class of Context-free Languages. CL, 10: 177-181.

Pullum, Geoffrey K., and Gerald Gazdar. 1982. Natural Languages and Context-free Languages. Linguistics and Philosophy, 4: 471-504.

Pullum, Geoffrey K. 1984a. On Two Recent Attempts to Show that English is not a CFL. CL, 10: 182-186.

Pullum, Geoffrey K. 1984b. Syntactic and Semantic Parsability. Proceedings of COLING84, 112-122. Stanford, CA: ACL.

Rounds, William C., Alexis Manaster-Ramer, and Joyce Friedman. To appear. Finding Natural Languages a Home in Formal Language Theory.
Mathematics of Language (Alexis Manaster-Ramer, ed.). Amsterdam: John Benjamins.

Shieber, Stuart M. 1985. Evidence against the Context-freeness of Natural Language. Linguistics and Philosophy, 8: 333-343.
1986
14
A MODEL OF REVISION IN NATURAL LANGUAGE GENERATION

Marie M. Vaughan
David D. McDonald
Department of Computer and Information Science
University of Massachusetts
Amherst, Massachusetts 01003

ABSTRACT

We outline a model of generation with revision, focusing on improving textual coherence. We argue that high quality text is more easily produced by iteratively revising and regenerating, as people do, rather than by using an architecturally more complex single pass generator. As a general area of study, the revision process presents interesting problems: Recognition of flaws in text requires a descriptive theory of what constitutes well written prose and a parser which can build a representation in those terms. Improving text requires associating flaws with strategies for improvement. The strategies, in turn, need to know what adjustments to the decisions made during the initial generation will produce appropriate modifications to the text. We compare our treatment of revision with those of Mann and Moore (1981), Gabriel (1984), and Mann (1983).

1. INTRODUCTION

Revision is a large part of the writing process for people. This is one respect in which writing differs from speech. In ordinary conversation we do not rehearse what we are going to say; however, when writing a text which may be used more than once by an audience which is not present, we use a multipass system of writing and rewriting to produce optimal text. By reading what we write, we seem better able to detect flaws in the text and see new options for improvement. Why most people are not able to produce optimal text in one pass is an open and interesting question. Flower and Hayes (1980) and Collins and Gentner (1980) suggest that writers are unable to juggle the excessive number of simultaneous demands and constraints which arise in producing well written text. Writers must concentrate not only on expressing content and purpose, but also on the discourse conventions of written prose: the constraints on sentence, paragraph, and text structure which are designed to make texts more readable. Successive iterations of writing and revising may allow the writer to reduce the number of considerations demanding attention at a given time.

The developers of natural language generation systems must also address the problem of how to produce high quality text. Most systems today concentrate on the production of dialogs or commentaries, where the texts are generally short and the coherence is strengthened by nonlinguistic context. However, in written documents coherence must be maintained by the text alone. In addition, written text must anticipate the questions of its readers. The text must be clear and well organized so that the reader may follow the points easily, and it must be concise and interesting so as to hold the reader's attention. These considerations place greater demands on a generation system.

Most natural language generation systems generate in a single pass with no revision. A drawback of this approach is that the information necessary for decision making must be structured so that at any given point the generator has enough information to make an optimal decision. While many decisions require only local information, decisions involving long range dependencies, such as maintaining coherence, may require not only a history of the decisions made so far, but also predictions of what future decisions might be made and the interactions between those decisions.
An alternative approach is a single pass system which incorporates provisions for revision of its internal representations at specific points in the generation process (Mann & Moore, 1981; Gabriel, 1984). Evaluating the result of a set of decisions after they have been made allows a more parsimonious distribution of knowledge since specific types of improvements may be evaluated at different stages. Interactions among the decisions made so far may also be evaluated rather than predicted. The problem remains, however, of not being able to take into account the interaction with future decisions.

A third approach, and the one described in this paper, is to use the writing process as a model and to improve the text in successive passes. A generation/revision system would include a generator, a parser, and an evaluation component which would assess the parse of what the generator had produced and determine strategies for improvement. Such a system would be able to tailor the degree of refinement to the particular context and audience. In an interactive situation the system may make no refinements at all, as in "off the cuff" speech; when writing a final report, where the quality of the text is more important than the speed of production, it may generate several drafts.

While single pass approaches may be engineered to give them the ability to produce high quality text, the parser-mediated revision approach has several advantages. Using revision can reduce the structural demands on the generator's representations, and thus reduce the overall complexity of the system. Since the revision component is analyzing actual text with a parser, it can assess long range dependencies naturally without needing to keep a history within the generator or having it predict what decisions it might make later. Revision also creates an interesting research context for examining both computational and psychological issues. In a closed loop system, the generator and parser must interact closely. This provides an opportunity to examine how these processes differ and what knowledge may be shared between them. In a similar vein, we may use a computational model of the revision task to assess the computational implications of proposed psychological theories of the writing process.

2. DEFINING THE PROBLEM

In order to make research into the problem of revision tractable, we need to first delimit the criteria by which to evaluate the text. They need to be broad enough to make a significant improvement in the readability of the text, narrow enough to be defined in terms of a representation a parser could build today, and have associated strategies for improvement that are definable in terms understood by the text planner and generator. In addition, we would like to delegate to the revision component those decisions which would be difficult for a generator to make when initially producing the text. As textual coherence often requires awareness of long range dependencies, we will begin by considering it an appropriate category of evaluation for a revision component.

Coherence in text comes from a number of different sources. One is simply the reference made to earlier words and phrases in the text through anaphoric and cataphoric pronominal references; nominal, verbal and clausal substitution of phrases with elements such as 'one', 'do', and 'so'; ellipsis; and the selection of the same item twice or two items that are closely related.
Coreferences create textual cohesion since the interpretation of one element in the text is dependent on another (Halliday and Hasan, 1976). Scinto (1983) describes a narrower type of cohesion which operates between successive predicational units of meaning (roughly clauses). These units can be described in terms of their "theme" (what is being talked about) and "rheme" (what is being said about it). Thematic progression is the organization of given and new information into theme-rheme patterns in successive sentences. Preliminary studies have shown (Glatt, 1982) that thematic progressions in which the theme of a sentence is coreferential with the theme or the rheme of the immediately preceding sentence are easier to comprehend than those with other thematic progressions. This ease of comprehension can be attributed to the fact that the connection of the sentence with previous text comes early in the sentence. It would appear that the longer the reader must wait for the connection, the more difficult the integration with previous information will be.

Another source of coherence is lexical connectives, such as sentential adjuncts ('first', 'for example', 'however'), adverbials ('subsequently', 'accordingly', 'actually'), and subordinate and coordinate conjunctions ('while', 'because', 'but'). These connectives are used to express the abstract relation between two propositions explicitly, rather than leaving it to the reader to infer. Other ways of combining sentences can function to increase coherence as well. Chafe (1985) enumerates the devices used to combine "idea units" in written text, including turning predications into modifications with attributive adjectives, preposed and postposed participles, and combining sentences using complement and relative clauses, appositives, and participle clauses. These structures function to increase connectivity by making the text more concise.

Paragraph structure also contributes to the coherence of a text. "Paragraph" in this sense (Longacre, 1979) refers to a structural unit which does not necessarily correspond to the orthographic unit indicated by an indentation of the text. Paragraphs are characterized by closure (a beginning and end) and internal unity. They may be marked prosodically by intonation in speech or orthographically by indentation in writing, and structurally, such as by initial sentence adjuncts. Paragraphs are recursive structures, and thus may be composed of embedded paragraphs. In this respect they are similar to Mann's rhetorical discourse structures (Mann, 1984).
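The preferred-progression criterion lends itself to a simple mechanical test. The sketch below is my own illustration, assuming sentences have already been analyzed into sets of theme and rheme referents (in the system described here, that analysis would be the parser's job); the referent identifiers are invented and paraphrase the Telex example used later in the paper.

    # A sketch of the preferred-thematic-progression test described above:
    # a sentence integrates easily when its theme is coreferential with
    # the theme or the rheme of the immediately preceding sentence.

    def preferred_progression(prev, new):
        """True if new's theme picks up prev's theme or rheme."""
        return bool(new["theme"] & (prev["theme"] | prev["rheme"]))

    s1 = {"theme": {"ibm"},     "rheme": {"merlin"}}   # "IBM developed Merlin ..."
    s2 = {"theme": {"merlin"},  "rheme": {"t6830"}}    # "Merlin competes with the T-6830 ..."
    s3 = {"theme": {"clemens"}, "rheme": {"telex"}}    # "Clemens left to work for Telex ..."

    print(preferred_progression(s1, s2))   # True: s2's theme was s1's rheme
    print(preferred_progression(s2, s3))   # False: flagged for the revision component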
3. A MODEL OF GENERATION AND REVISION

In this section we will outline a model of generation with revision, focusing on improving textual coherence. First we establish a division of labor within the generation/revision process. Then we look at the phases of revision and consider the capabilities necessary for recognizing deficiencies in cohesion and how they may be repaired. In the fourth section, we apply this model to the revision of an example summary paragraph.

The initial generation of a text involves making decisions of various kinds. Some are conceptually based, such as what information to include and what perspectives to take. Others are grammatically based, such as what grammatical form a concept may take in the particular syntactic context in which it is being realized, or how structures may be combined. Still others are essentially stylistic and have many degrees of freedom, such as choosing a variant of a clause or whether to pied pipe in a relative clause. The decisions that revision affects are at the stylistic level; only stylistic decisions are free of fixed constraints and may therefore be changed. Changes to conceptually dictated decisions would shift the meaning of the text. During initial generation, heuristics for maintaining local cohesion are used, drawing on the representations of simple local dependencies. By "local", we mean specifically that we restrict the scope of information available to the generator to the sentence before, so that it can use thematic progression heuristics, letting revision take care of longer range coherence considerations.

The revision process can be modeled in terms of three phases: 1) recognition, which determines where there are potential problems in the text; 2) editing, which determines what strategies for revision are appropriate and chooses which, if any, to employ; 3) re-generation, which employs the chosen strategy by directing the decision making in the generation of the text at appropriate moments. This division reflects an essential difference in the types of decisions being made and the character of representations being used in each phase.

The recognition phase is responsible for parsing the text and building a representation rich enough to be evaluated in terms of how well the text coheres. Since in this model the system is evaluating its own output, it need not rely only on the output text in making its judgements; the original message input to the generator is available as a basis for comparing what was intended with what was actually said. The goal is to notice the relationships among the things mentioned in the text and the degree to which the relationships appear explicitly. For example, the representation must capture whether a noun phrase is the first reference to an object or a subsequent reference, and if it is a subsequent reference, where and how it was previously mentioned.

The recognition phase analyzes the text as it proceeds using a set of evaluation criteria. Some of these criteria look through the representation for specific flaws, such as ambiguous referents, while others simply flag places where optimizations may be possible, such as predicate nominal or other simple sentence structures which might be combined with other sentences. Other criteria compare the representation with the original plan in order to flag potential places for revision such as parallel sub-plans not realized in parallel text structure, or relations included in the plan which are expressed implicitly, rather than explicitly, in the text.

Once a potential problem has been noted, the editing phase takes over. For each problem there is a set of one or more strategies for correcting it. For example, if there is no previous referent for the subject of a sentence, but there is a previous reference to the object, the sentence might be changed from active to passive; or if the subject has a relation to a previous referent which is not explicitly mentioned in the text, more information may be added through modification to make that implicit connection explicit. The task of the editing phase is to determine which, if any, of these strategies to employ. (It may, for example, decide not to take any action until further text has been analyzed.) However, what constitutes an improvement is not always clear.
While using the passive may strengthen coherence, active sentences are generally preferred over passives. And while adding more information may strengthen a referent, it may also make the noun phrase too heavy if there are already modifications. The criteria that choose between strategies must take into account the fact that the various dimensions along which the text may be evaluated are often in conflict. Simple evaluation functions will not suffice.

The final step is actually making the change once the strategy has been chosen. This essentially involves "marking" the input to the generator, so that it will query the revision component at appropriate decision points. For example, if the goal is to put two sentences into parallel structure, the input plan which produces the structure to be changed would be marked. Then, when the generator reached that unit, it would query the revision component as to where the unit should be put in the text (e.g. a main clause or a subordinate one) and how it should be realized (e.g. active or passive).

Note that as the revision process proceeds, it is continually dealing with a new text and plan, and must update its representations accordingly. New opportunities for changes will be created and previous ones blocked. We have left open the question of how the system decides when it is done. With a limited set of evaluation criteria, the system may simply run out of strategies for improvement. The question will be more easily answered empirically when the system is implemented.

An important architectural point of the design is that the system is not able to look ahead to consider later repercussions of a change; it is constrained to decide upon a course of action considering only the current state of the textual analysis and the original plan. While this constraint obviates the problems of the combinatorial explosion of potential versions and indefinite lookahead, we must guard against the possibility of a choice causing unforeseen problems in later steps of the revision process. One way to avoid this problem is to keep a version of the text for each change made and allow the system to return to a previous draft if none of the strategies available could sufficiently improve the text.
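Taken together, the three phases form a simple closed loop over successive drafts. The sketch below is my own schematic rendering in which every component is a toy stand-in (a real system would use the full generator and parser described above); only the control flow, including the retention of earlier drafts, is the point.

    def generate(plan):
        return " ".join(plan)                    # toy stand-in for the generator

    def recognize(text):
        # Recognition: parse the draft and flag problems (here, word repetition).
        words = text.split()
        return [w for i, w in enumerate(words[:-1]) if words[i + 1] == w]

    def edit(problems):
        # Editing: choose a repair strategy for a flagged problem, if any.
        return ("drop-repeat", problems[0]) if problems else None

    def regenerate(plan, strategy):
        # Re-generation: "mark" the plan so the generator decides differently.
        _, word = strategy
        out, seen = [], False
        for unit in plan:
            if unit == word and seen:
                continue
            seen = seen or unit == word
            out.append(unit)
        return out

    plan = ["the", "the", "system", "revises", "drafts"]
    drafts = []                                  # earlier drafts kept for backtracking
    text = generate(plan)
    while True:
        drafts.append(text)
        strategy = edit(recognize(text))
        if strategy is None:
            break                                # no strategies left: the system is done
        plan = regenerate(plan, strategy)
        text = generate(plan)
    print(drafts)  # ['the the system revises drafts', 'the system revises drafts']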
4. PARAGRAPH ANALYSIS

In this section we use the model outlined above to describe how the revision component could improve a generated text. What follows is an example of the incremental revision of a summary paragraph. The discussion at each step gives an indication of the character of information needed and the types of decisions made in the recognition, editing, and regeneration phases. The example is from the UMass COUNSELOR Project, which is developing a natural language discourse system based on the HYPO legal reasoning system (Rissland, Valcarce, & Ashley, 1984). The immediate context is a dialog between a lawyer and the COUNSELOR system. Based on information from the lawyer, the system has determined that the lawyer's case might be argued along the dimension "common employee transferred products or tools". The system summarizes a similar case that has been argued along the same dimension as an example. The information to be included in the summary is chosen from the set of factual predicates that must be satisfied in order for the particular dimension to apply. In the initial generation of the summary, the overall organization is guided by a default paragraph organization for a case summary.

The first sentence functions to introduce the case and place it as an example of the dimension in question. The body presents the facts of the case organized according to a partial ordering based on the chronology of the events. The final sentence summarizes the case by giving the action and decision. The choice of text structure is guided by simple heuristics which combine sentences when possible and choose a structure for a new sentence based on thematic progression, so that the subject of the new sentence is related to the theme or rheme of the previous sentence.

(1) The case Telex vs. IBM was argued along the dimension "common employee transferred products or tools". IBM developed the product Merlin, which is a disk storage system. Merlin competes with the T-6830, which was developed by Telex. The manager on the Merlin development project was Clemens. He left IBM in 1972 to work for Telex and took with him a copy of the Merlin code. IBM sued Telex for misappropriation of trade secret information and won the case.

The recognition phase analyzes the text, looking for both flaws in the text and missed opportunities. The repetition of the word "develop" in the second and third sentences alerts the editing phase to consider whether a different word should be chosen to avoid repetition, or the repetition should be capitalized on to create parallel structure. By examining the input message, it determines that these clauses were realized from parallel plans, so it chooses to realize them in parallel structure. In the regeneration phase, the message is marked so that the revision component can be queried at the appropriate moments to control when and how the information unit for "Telex developed the T-6830" will be realized. After generation of the second sentence, the generator has the choice of attaching either <develop Telex T-6830> or <compete Merlin T-6830> as the next sentence. As one of these has been marked, the revision component is queried. Its goal is to make this sentence parallel to the previous one, so it indicates that the marked unit, <develop ...>, should be the next main clause and should be realized in the active voice. Once that has been accomplished, the default generation heuristics take over to attach <competes with ...> as a relative clause:

(2) The case Telex vs. IBM was argued along the dimension "common employee transferred products or tools". IBM developed the product Merlin, which is a disk storage system. Telex developed the T-6830, which competes with Merlin. The manager on the Merlin development project was Clemens. He left IBM in 1972 to work for Telex and took with him a copy of the Merlin code. IBM sued Telex for misappropriation of trade secret information and won the case.

Once the change is completed, the recognition phase takes over once again. It notices that sentence four no longer follows a preferred thematic progression as "Merlin" is no longer a theme or rheme of the previous sentence. It considers the following possibilities:

-- Create a theme-theme progression by moving sentence five before sentence four and beginning it with "Telex", as in: "Telex was who Clemens worked for after he left IBM in 1972." (Note there are no other possibilities for preferred thematic progressions without changing previous sentences.)

-- Reject the previous change which created the parallel structure and go back to the original draft.

-- Leave the sentence as it is.
Although there is no preferred thematic progression, cohesion is created by the repetition of "Merlin" in the two sentences.

-- Create an internal paragraph break by using "in 1972" as an initial adjunct. This signals to the reader that there is a change of focus and reduces the expectation of a strong connection with the previous sentences.

The editor chooses the fourth strategy, since not only does it allow the previous change to be retained, but it imposes additional structure on the paragraph. Again during the regeneration phase the editor marks the information unit in the message which is to be realized differently in the new draft. Default generation heuristics choose to realize "Clemens" as a name, rather than a pronoun as it had been, and to attach "the manager ..." as an appositive.

(3) The case Telex vs. IBM was argued along the dimension "common employee transferred products or tools". IBM developed the product Merlin, which is a disk storage system. Telex developed the T-6830, which competes with Merlin. In 1972, Clemens, the manager on the Merlin development project, left IBM to work for Telex and took with him a copy of the Merlin code. IBM sued Telex for misappropriation of trade secret information and won the case.

5. OTHER REVISION SYSTEMS

Few generation systems address the question of using successive refinement to improve their output. Some notable exceptions are KDS (Mann & Moore, 1981), Yh (Gabriel, 1984), and Penman (Mann, 1983). KDS and Yh use a top down approach where intermediate representations are evaluated and improved before any text is actually generated; Penman uses a cyclic approach similar to that described here.
The "planner" tries to find a sequence of experts that will transform the initial situation (initially a specification to be generated) to a goal situation (ultimately text). First, experts which group the information into paragraph size sets are applied; then other experts divide those sets into sentence size chunks; next, sentence schemata experts determine sentence structure; and finally experts which choose lexical items and generate text apply. After each expert applies, critics evaluate the result and may call an expert to improve it. Like KDS, this type of approach makes editing of global coherence considerations difficult since structural decisions are made before lexical choices. The Penman System is the most similar to the one described in this paper. The principle data flow and division of labor into modules are the same: planning, sentence generation, improvement. However, an important difference is that Penman does not parse the text in order to revise it. Rather it uses quantitative measures, such as sentence length and level of clause embeddings to flag potential trouble spots. While this approach may improve text along some dimensions, it will not be capable of improving relations such as coherence, which depend on understanding the text. A similarity between Penman's revision module and the model described in this paper is that neither has been implemented. As the two systems mature, a more complete comparison may be made. 6. CONCLUSION Using the writing process as a model for generation is effective as a means of improving the quality of the text generated, especially when considering intersentential relations such as coherence. Decisions which increase coherence are difficult for a generator to make on a first pass without keeping an elaborate history of its previous decisions and being able to predict future decisions. Once the text has been generated however, revision can take advantage of the global information available to evaluate and improve coherence. The next steps in the development of the system proposed in this paper are clear: For the recognition phase, a more comprehensive set of evaluation criteria need to be enumerated and the requirements they place on a parser specified. For the editing phase, the relationships between strategies for improving text, and changes in generation decisions and variation in output text need to be explored. Finally, a prototypical model of the system needs, to be implemented so that the actual behavior of the system may be studied. 7. ACKNOWLEDGEMENTS We would like to thank John Brolio and Philip Werner for their helpful commentary in the preparation of this paper. 95 8. REFERENCES Chafe, Wallace L. (1985) "Linguistic Differences Produced by Differences Between Speaking and Writing", in Olson, David K., Nancy Torrance, & Angela Hildyard, eds. Literacy, Language and Learning: The nature and consequences of reading and writing, Cambridge University Press, pp. I05-123. Clippinger, John, & David D. McDonald (1983) "What makes Good Writing Easier to Understand", IJCAI Proceedings, pp.730-732. Collins, Allan & Dedre Gentner (1980) "A Framework for a Cognitive Theory of Writing", in Gregg & Steinburg, eds, pp. 51-72. Flower, Linda & John Hayes (1980) "The Dynamics of Composing: Making Plans and Juggling Constraints", in Gregg & Steinberg, eds, pp. 31-50. Gabriel, Richard (1984) "Deliberate Writing", to appear in McDonald & Bolc, eds. Papers on Natural Language Generation, Springer- Verlag, 1987. Glatt, Barabara S. 
(I 982) "Defining Thematic Progressions and Their Relationships to Reader Comprehension", in Nystrand, Martin, ed. What Writers Know." the language, process, and structure of written discourse, New York, NY: Academic Press, pp. 87-104. Gregg, L. & E.R. Steinberg, eds. (1980) Cognitive Processes in Writing, Hilldale, N J: Lawrence Erlbaum Associates. Halliday, M.A.K., & Ruqaiya Hasan (1976) Cohesion in English, London: Longman Group Ltd. Hayes, John, & Linda Fower (1980) "Identifying the Organization of Writing Processes", in Gregg & Steinberg (Eds), pp. 3-30. Longacre, R.E. (1979) "The Paragraph as a Grammatical Unit", in Syntax and Semantics, Vol 12: Discourse and Syntax, Academic Press, pp. 115-134. Mann, William C. & James Moore (1981) "Computer Generation of Multiparagraph English TeIt", American Journal of Computational Linguistics, Vol.7, No.I, Jan-Mar, pp.17-29. Mann, William C. (1983) An Overview of the Penman Text Generation System, USCIISI Technical Report RR-83- I 14. Mann, William C. (1984) Discourse Structures for Text GenerationISI Technical Report ISIIRR- 84-127. McDonald, David D. (1985) "Recovering the Speaker's Decisions during Mechanical Translation", Proceedings of the Conference on Theoretical and Methodological Issues in Machine Translation of Natural Languages, Colgate University, pp. 183-199. McDonald, David D. & James Pustejovsky (1985) "Description-directed Natural Language Generation". IJCA I Proceedings, pp.799-805. Rissland E., E. Valearce, & K. Ashley (1984) "Explaining and Arguing with Examples", Proceedings of A A A 1-84. Scinto, Leonard, F.M. (1983)"Functional Connectivity and the Communicative Structure of Text", in Petofi, Janos S. & Emel Sozer, eds. (1983) Micro and Macro Connexity of Texts, Hamburg: Buske, pp.73- I 15. 96
1986
15
The ROMPER System: Responding to Object-Related Misconceptions using Perspective 1

Kathleen F. McCoy
Dept. of Computer and Information Sciences
University of Delaware
Newark, De. 19716

1 Much of this work was done while the author was at the University of Pennsylvania and was partially supported by the ARO grant DAA20-84-K-0061 and the NSF grant MCS81-07290.

Abstract

As a user interacts with a database or expert system, s/he may reveal a misconception about the objects modeled by the system. This paper discusses the ROMPER system for responding to such misconceptions in a domain independent and context sensitive fashion. ROMPER reasons about possible sources of the misconception. It operates on a model of the user and generates a cooperative response based on this reasoning. The process is made context sensitive by augmenting the user model with a new notion of object perspective which highlights certain aspects of the user model due to previous discourse.

1 Introduction

A study of transcripts of expert-user dialogues reveals that users often exhibit misconceptions about the objects modeled in a domain. This paper describes the ROMPER system (Responding to Object-Related Misconceptions using PERspective) which is able to respond to certain classes of these misconceptions in a principled manner. In doing so the system sheds light not only on the process of correcting misconceptions, but also on issues in natural-language generation, user models, and modeling certain contextual effects by a "filtering" of the knowledge representation.

The ROMPER system functions as a part of a natural-language interface to a database or expert system. Input to ROMPER is a specification that a misconception has been detected. In this work a misconception is defined to be some discrepancy between what the system believes (i.e., what is contained in the system knowledge base) and what the user believes (as exhibited through the conversation). The system knowledge base includes an object taxonomy and knowledge about object attributes and their possible values.

Several factors may influence the structure and content of responses to queries that reveal misconceptions. These include the goals of the conversational participants. If the misconception is not important to these goals, the response may not address the misconception or may address it only minimally. ROMPER is concerned with correcting misconceptions that are important to the current goals of the conversational participants and is thus concerned with generating a maximal response. This response is aimed at eliminating the discrepancy between what the user believes and what the system believes by bringing the user's knowledge into line with the system's. This means that the system must not only give the user the correct information, but must present it in such a way so as to have the user adopt that information. ROMPER has a user model available to aid in this task. The user model constitutes what the system believes the user believes about the domain. It contains the same kind of information as is contained in the system's knowledge base -- an object taxonomy and information about objects' attributes and their values. The content of the user model, however, may be very different from the content of the system's knowledge base. For instance, it may contain less information than is contained in the system knowledge base, or it may contain some information that is inconsistent with the system knowledge base. The user model will not, however, contain more information than is contained in the system knowledge base since the system is assumed to be an expert in the domain.

In an attempt to respond to a misconception in a natural way, the system operates on the model of the user attempting to find certain structural configurations which might indicate support for the misconception. If one of the configurations is found, then a response is generated that refutes the found support. ROMPER is specifically concerned with responding to two kinds of misconceptions: those involving an object's classification (which I call misclassifications) and those involving an object's attributes (which I call misattributions). Certain structural configurations have been identified indicating possible support for both kinds of misconceptions. Each identified configuration has a response strategy associated with it which may be instantiated to respond to the misconception. The whole process is made context sensitive by a new notion of object perspective which acts to filter the user model, highlighting those aspects which are made important by previous dialogue, while suppressing others.

Output from ROMPER is a formal specification of a response. This specification is then input to the MUMBLE system [McD80] which, using a dictionary and grammar supplied by Robin Karlin [Kar85], produces actual English text.
The user model will not, how- ever, contain more information than is contained in the system knowledge base since the system is assumed to be an expert in the domain. In an attempt to respond to a misconception in a natural way, the system operates on the model of the user attempting to find certain structural configurations which might indicate support for the misconception. If one of the configurations is found, then a response is generated that refutes the found sup- port. ROMPER is specifically concerned with responding to two kinds of misconceptions: those involving an object's clas- sification (which I call misclassifications) and those involving an objects attributes (which I call n~attributlons). Certain structural configurations have been identified indicating pos- sible support for both kinds of misconceptions. Each identi- fied configuration has a response strategy associated with it which may be instantiated to respond to the misconception. The whole process is made context sensitive by a new notion of object perspective which acts to filter the user model, high- lighting those aspects which are made important by previous dialogue, while suppressing others. The filtering gained by ob- ject perspective allows the same misconception by the same user to be responded to differently in different contextual situations. Output from ROMPER is a formal specification of a re- sponse. This specification is then input to the MUMBLE sys- tem [McDS01 which, using a dictionary and grammer supplied by Robin Karlin [Kar85], produces actual English text. 97 2 Misconception Responses The view of natural-language generation taken in this system is the same as that taken in [McK82]. The generation process is seen as consisting of two parts: (1) determining the content and structure of the response and producing a formal message specification, and (2) transforming that specification into actual English text. My work has concentrated on deter- mining the content and structure of a response to a misconcep- tion. It attempts to automate the process of deciding what in- formation to include in a response to a misconception by giving the system the ability to reason about certain classes of miscon- ceptions and typical ways of correcting misconceptions in one of the identified classes. This should be contrasted with the a priori listing of misconceptions and responses found in most existing systems that handle misconceptions ([SC80], [BB78], and [Woo84]). 2 The form of the responses generated by ROMPER de- rived from an analysis of transcripts of human conversational partners. These transcripts revealed that responses to state- ments containing misconceptions often include more than a sim- ple denial of the wrong information. This is particularly true in circumstances where the misconception is about something im- portant to the current goals of the participants. In addition to denying the information involved in the misconception, many misconception responses include both the corresponding cor- rect information, and additional justification for the denial and correction given. The justification often involves refuting faulty reasoning that may have led the user to the misconception. While it may seem that the kinds of faulty reasoning that the user may be using to arrive at a misconception are limitless, the transcript analysis revealed a surprisingly small number of misconception support relations that are refuted by the human experts. 
In addition, these few misconception support relations can be couched in terms of a knowledge base (KB) structure rather than its content. Thus a system reasoning on a model of the user might look for such relations in a domain independent fashion. If one is found, information refuting the misconception support might be included in the corrective response. To see this, let us examine the number of ways that a human expert was found to correct one of the misconception types handled by ROMPER: misclassifications. The strategies used by the human experts to respond to a misclassification can be exemplified by the number of possible responses to the following misconception:

U. I thought whales were fish.

R1. No, they are mammals. You may have thought they were fish because they are fin-bearing and live in the water. However, they are mammals since, (while fish have gills) whales breathe through lungs and feed their young with milk.
An expert having these beliefs might very well find it reasonable to concede that the similarity between whales and fish does indeed exist, but then go on to show that that similarity is not enough to classify whales as fish. S/he may do this, as above, by offering properties that make whales mammals instead of fish.

Given that this analysis might explain a human's response to a misconception, we might have a computer system adopt this strategy to respond to a misconception in a natural way. First, the information included in a response like R1 can be captured in a response schema as shown below. R1 can be seen as an instantiation of this schema where OBJECT is instantiated with whale, POSITED with fish, and REAL with mammal. The shared attributes are instantiated in the obvious way.

((deny (classification OBJECT POSITED))
 (state (classification OBJECT REAL))
 (concede (share-attributes OBJECT POSITED ATTRIBUTES1))
 (override (share-attributes ..... POSITED ATTRIBUTES2))
 (override (share-attributes OBJECT REAL ATTRIBUTES3)))

The above schema is called the "like-super" schema because it is used by ROMPER when the user exhibits a misconception by wrongly classifying some OBJECT as a POSITED superordinate and when ROMPER determines that a probable reason for the misclassification is that the user believes that the OBJECT and the POSITED superordinate are similar to each other. The schema captures a response like R1 by specifying a denial of the incorrect classification, a statement of the correct classification, and then an offering of justification. The justification is in the form of conceding the similarity that may have led to the misclassification (e.g., the shared attributes), but overriding that conceded information with attributes that are not shared by the OBJECT and the POSITED superordinate but instead distinguish the two.

It should be pointed out that this schema encodes two kinds of information: a domain-independent specification of the content of each proposition included in the response (e.g., an object classification or shared attributes between objects), as well as information about the rhetorical force or communicative role played by each proposition (e.g., a denial or statement or conceded information). The content specification is derived from the transcript analysis. The rhetorical force is derived from both the transcript analysis and from work done by [McK82], [MT83], and [Man84], who have developed theories about the role that a proposition can play in a discourse. The goal in using such a schema is to have a specification of a response that may be filled in with information from the user model and that, when instantiated, contains enough rhetorical information to be turned into a cohesive English text by a tactical component. The schema above meets both of these requirements.
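As a minimal sketch of this idea, a schema can be stored as a list of rhetorical-force/proposition pairs and instantiated by substituting user-model bindings. The Python representation and names below are illustrative assumptions, not ROMPER's actual code, and one override line of the full schema is omitted for brevity.

# Illustrative sketch of schema storage and instantiation
# (assumed data structures; not ROMPER's implementation).
LIKE_SUPER = [
    ("deny",     ("classification", "OBJECT", "POSITED")),
    ("state",    ("classification", "OBJECT", "REAL")),
    ("concede",  ("share-attributes", "OBJECT", "POSITED", "ATTRIBUTES1")),
    ("override", ("share-attributes", "OBJECT", "REAL", "ATTRIBUTES3")),
]

def instantiate(schema, bindings):
    # Substitute user-model bindings for schema variables, keeping
    # the rhetorical force attached to each proposition.
    return [(force, tuple(bindings.get(term, term) for term in prop))
            for force, prop in schema]

# Bindings a user-model analysis might produce for "I thought whales were fish":
bindings = {
    "OBJECT": "whale", "POSITED": "fish", "REAL": "mammal",
    "ATTRIBUTES1": ("fin-bearing", "water-living"),
    "ATTRIBUTES3": ("breathe-through-lungs", "feed-young-with-milk"),
}

for force, proposition in instantiate(LIKE_SUPER, bindings):
    print(force, proposition)

The instantiated pairs correspond to the message specification that is handed to the tactical component; the rhetorical-force tags are what allow the separate realization component to produce cohesive text.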
The justification included in R2 is also in the form of a concede/override pair. However, in the case of R2, rather than concede a similarity between whales and all fish, a similarity between whales and some subset of fish (i.e., the sharks) is conceded. The use of this response might be explained by the expert believing that the user believes whales and sharks to be similar and salient at this point in the discourse. The expert might imagine the user to have reasoned: "I don't know how to classify whales, but I do know that they are similar to sharks and I know that sharks are fish. Perhaps whales are fish as well."

This analysis was used in developing ROMPER by associating a schema based on responses like R2 with a user model configuration showing a similarity between the misclassified object and some descendent of the posited superordinate. The schema is termed the "like-some-super" schema and is shown below:

((deny (classification OBJECT POSITED))
 (state (classification OBJECT REAL))
 (concede (similarity OBJECT DESCENDENT
           (share-attributes OBJECT DESCENDENT ATTRIBUTES1)))
 (override (share-attributes OBJECT REAL ATTRIBUTES2)))

Response R3 can be thought of as the degenerate strategy since it contains no justification for the denial/correction pair. ROMPER instantiates the schema corresponding to R3 when neither of the two above-mentioned knowledge base configurations can be found in the user model.

So far this paper has concentrated on misclassifications. ROMPER also handles misconceptions involving an object's attributes. The transcript analysis revealed three correction strategies for misattributions, as exemplified by the following responses:

U. What is the interest rate on this stock?

R4. Stock doesn't have an interest rate. Were you thinking of a bond?

R5. Stock doesn't have an interest rate. Did you mean dividend?

R6. Stock doesn't have an interest rate.

ROMPER employs three correction schemas to handle misattributions, one for each of the response strategies shown. R4 can be seen as an instantiation of ROMPER's wrong-object schema. This schema offers an object which has the attribute involved in the misconception and that the user may have either confused with the misconception object or made a bad analogy from. It is instantiated when an object is found that has the attribute involved in the misattribution and is similar to the misconception object.

R5 exemplifies the wrong-attribute schema, which offers an attribute that the object involved in the misconception does have. This response is used when there is reason to believe the user may have confused the attribute involved in the misconception with a similar attribute that the object does have. ROMPER uses the schema when the misconception object has an attribute that is similar to the attribute involved in the misconception.

As is the case with the misclassifications, there is a "degenerate" schema for misattributions. This schema contains no justification for the correction and is exemplified by R6.

In summary, a study of transcripts of humans responding to misconceptions reveals a great deal of regularity in the way misconceptions about objects are corrected. One can abstract a small number of response strategies for each of the various knowledge base features that might be involved in a misconception. Each of these strategies can be seen as refuting a different kind of support that the user may have for the information involved in the misconception. These strategies are captured as schemas in the ROMPER system and each schema is associated with a domain-independent description of the kind of support it refutes. ROMPER, when faced with a misconception, operates on a model of the user looking for evidence for one of the identified kinds of support. If enough evidence is found, the response to the misconception is generated by instantiating the corresponding schema.

4 Effects of Context

The above section outlined a method for correcting misconceptions. While the method does seem to be appealing, at first glance it seems to have a major flaw.
It does not seem to take into account the role that previous context plays in correcting misconceptions. The responses given by the human experts were very context dependent. In two different contexts a human expert might choose to correct the same misconception by the same user in two different ways. For example, in response to the misconception exhibited by "I thought whales were fish", an expert might choose R1 in one context and R2 in another. How can this be explained if the process described above is used to respond?

I claim that the process of correcting misconceptions is context sensitive not because the process changes with context, but because what the process works on changes with context. In particular, the piece of the user model that is analyzed in looking for possible sources of the misconception changes with context. Instead of doing the user model analysis on a flat representation containing everything that the user knows at equal levels of importance, the analysis is done on a model that has been highlighted by previous discourse. Previous discourse serves to highlight certain aspects of the user model while suppressing others. Different highlighting resulting from different previous discourse may cause the user model analysis to conclude that different support had been used for the misconception and therefore cause a different response strategy to be selected. Object perspective is a notion which can be used to model this contextual effect.

5 Object Perspective

In this section I introduce a new notion of object perspective as an augmentation to a standard semantic network representation. Before introducing this notion, let us first examine what we want this notion to account for.

The notion of object perspective has previously been discussed in the literature. It can be likened to the "point of view" one takes on an object in a particular discussion. From a particular point of view certain characteristics of the object seem more important than others. For instance, a particular building may be discussed from the point of view of being someone's home on the one hand, and from the completely different point of view of being an architectural work on the other. The two different views of the same building cause different groups of attributes to be important. It is this highlighting of a whole group of attributes that must be explained. Notice that it could not be explained by a focusing mechanism which highlights attributes which have been mentioned in the preceding discourse, because many of the highlighted attributes may not have been explicitly mentioned. What needs to be captured is the feeling that each view calls to mind a "precompiled" set of attributes that seem to be important while that view is in effect.

An attempt to explain this effect has been made by defining object perspective as viewing an object as a member of one superordinate when, in fact, it may have many superordinates ([Gro77], [BW77], and [TWF*82]). The highlighting is achieved through a limited inheritance mechanism. An object inherits only those attributes contributed by the one superordinate deemed "in perspective". Thus, when a building is viewed as an architectural work, for example, it inherits only those attributes associated with the concept architectural-work in the generalization hierarchy. Any attributes that it might inherit from other superordinates (e.g., home) are ignored.
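A minimal sketch of this limited-inheritance notion follows; the object name and attribute sets are invented for illustration.

# Sketch of the earlier, inheritance-based notion of perspective:
# an object inherits attributes only from the one superordinate
# currently deemed "in perspective". (Illustrative names only.)
SUPERORDINATES = {
    "home":               {"occupants", "address", "warmth"},
    "architectural-work": {"style", "architect", "materials"},
}
PARENTS = {"building-27": ["home", "architectural-work"]}

def attributes_in_perspective(obj, perspective):
    # Limited inheritance: all superordinates other than the one
    # in perspective are ignored entirely.
    if perspective in PARENTS[obj]:
        return SUPERORDINATES[perspective]
    return set()

print(attributes_in_perspective("building-27", "architectural-work"))
# -> {'style', 'architect', 'materials'}; the 'home' attributes are ignored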
While this notion is intuitively appealing, in practice it is problematic (see [McC85] for details) and is unable to handle some additional effects which intuitively should be handled by object perspective. Two of these effects will be discussed here.

During the course of a conversation it is usually the case that more than one object will be discussed. When this happens, usually the same kinds of things are discussed about the objects. In essence, a particular highlighting of attributes (or point of view) seems to be in force during the conversation. Yet this highlighting is applied to different objects. What seems to be happening is that the conversational partners are viewing an entire group of objects from the same perspective. This cannot be accounted for by the previous definition of object perspective unless each of the objects under discussion can be said to have the same superordinate.

A second effect which is not accounted for by the above definition, yet seems to hinge on object perspective, has to do with the heightened importance of some objects during a discourse. For instance, in the responses R1-R3 above, the correct classification of whale was given as mammal. It is the case, however, that whales are cetaceans and cetaceans are mammals. If the expert above thought that U. knew about cetaceans, why wasn't cetacean given as the correct classification? Since there was no preceding discourse given in this case, some default context would have to be in force. Apparently, in this context cetacean did not seem important enough to mention. Yet in other contexts, one can imagine cetacean being given as the correct classification even though it had not yet been explicitly referred to in the preceding discourse. The importance of the object cetacean seems to have something to do with the current perspective from which objects are being viewed. The previous definitions of object perspective do not address this issue.

5.1 Perspective: Definition and Representation

I claim that all of the above criteria can be met by a simple notion of object perspective which has the following properties:

First, instead of tying perspective into the generalization hierarchy of objects as has been done in the past, the new notion of perspective will be independent of that hierarchy. "Perspectives" which can be taken on the objects in the domain will be defined and will sit in a structure which is orthogonal to the generalization hierarchy.

Second, the number of such perspectives that need be defined for the objects in a given domain of discourse is small and finite. Moreover, any given domain object may be viewed from any one of several perspectives defined for that domain. As it turns out, it will make more sense to view some of the objects in the domain through some perspectives and not others, but this is a feature of perspectives which will be taken advantage of later.

Third, each perspective comprises a set of attributes with associated salience values. It is these salience values that dictate which attributes are highlighted and which are suppressed.

Fourth, one such perspective is designated active at a particular point in the discourse.

This notion of object perspective works as follows. An object or group of objects is still said to be viewed through a perspective. In particular, any object which is accessed by the system is viewed through the current active perspective.
However, instead of dictating which attributes an object inherits, the active perspective affects the salience values of the attributes that an object possesses (either directly or inherited through the generalization hierarchy). The active perspective essentially acts as a filter on an object's attributes: raising the salience of, and thus highlighting, those attributes which have a high salience rating in the active perspective, and lowering the salience of, and thus suppressing, those attributes which are either given a low salience value or do not appear in the active perspective.

The importance of an object in a discourse is determined by the salience values given to the attributes it possesses. The idea is that the whole becomes highlighted by having its parts highlighted. Thus, during a discussion in which the active perspective highlights many attributes contributed by the object "cetacean" in our generalization hierarchy, cetacean will be seen as an important object. If, on the other hand, none of the attributes associated with cetacean are highlighted, then that object will be suppressed.

This notion of object importance realizes the intuitive notion that it makes "more sense" to view some objects through particular perspectives than others. It makes more sense to view an object through perspectives that highlight many of the object's attributes and thereby make the object more dominant. Notice that we can see a certain amount of symmetry here. The perspective determines the salience of an object's attributes and the object's importance; the object and its attributes determine how likely the object is to be viewed from a particular perspective.

5.2 Using Perspective

A model of a particular domain would include the usual object taxonomy containing all of the objects in the domain and all of the attributes those objects possess. So in our fish-mammal domain we would have sharks as a kind of fish with attributes like "scare-people" and "large-aquatic-creature". In addition, all of the attributes of fish would also be represented, and sharks would inherit those attributes as well.

In addition to the object taxonomy, we must build a separate structure containing the perspectives that can be taken on the domain objects. One perspective we might imagine defining for the fish-mammal domain would be the "body-characteristics" perspective. In this perspective attributes like "fin-bearing", "have-gills", and "breathe-through-lungs" would be given high salience and thus highlighted. Other attributes would be suppressed by this perspective.

Another perspective that might be defined for the fish-mammal domain might be the "common-people's-perception" perspective. This perspective might highlight attributes like "large-aquatic-creatures" and "scare-people". Other attributes, like "have-gills" and "fin-bearing", might be suppressed by this perspective.

ROMPER uses the highlighting from object perspective in two ways. First, during the user model analysis it uses the information to check for user model configurations which might indicate particular kinds of support for a misconception. Section 2 introduced two user model configurations which were associated with response schemas. The like-super schema was associated with a user model configuration that indicated that the user believed the misclassified object was like the posited superordinate.
The like-some-super schema was associated with a user model configuration that indicated that the user believed the misclassified object was like some descendent of the posited superordinate. Notice that both of these user model configurations hinge on a similarity assessment between objects. The similarity metric used by ROMPER is one that is based on the objects' common and disjoint attributes and takes attribute salience into account [Tve77]. This metric will be discussed below. Since the similarity metric takes attribute salience into account, and attribute salience is affected by object perspective, the active perspective can influence the selection of a misconception response schema.

Second, ROMPER uses the highlighting from object perspective to instantiate the selected response schema. It attempts to do this using only attributes deemed important by the current perspective.

5.3 Object Similarity

As was mentioned above, the object similarity metric used by ROMPER must be sensitive to context. To date, most AI systems that use object similarity use a metric that is based on distance in the generalization hierarchy. Such a metric is not context sensitive.

The ROMPER system uses a similarity metric based on work done in [Tve77] which allows contextual information to be taken into account. Tversky's metric, called a contrast model, is based on the common and disjoint features/properties of the objects involved. Suppose we have two objects a and b, where A is the set of properties associated with object a and B is the set of properties associated with object b. Tversky's measure can be expressed as:

s(a,b) = θ·f(A ∩ B) − α·f(A − B) − β·f(B − A)

for some θ, α, and β > 0. In the above equation θ, α, and β are parameters which alter the importance of each piece of the equation. The function f maps over the features and yields a salience rating for each. In essence, the contrast model states that the similarity of two objects is some function of their common features minus some function of their disjoint features. The importance of each particular feature involved (determined by the function f) and the importance of each piece of the equation (determined by θ, α, and β) may change with context.

In order to use the metric, we must come up with values for the functions in the equation. Tversky suggests that the θ, α, and β parameters might be affected by the relative prominence of objects a and b in the discourse. If a is relatively more important, then α should be greater than β, resulting in the attributes of the more prominent object having a greater influence over the similarity assessment. While I would conjecture that information about the focus of the discourse [Gro81], [Sid83], [GJW83] might give an indication of an object's prominence and would therefore be useful in setting the values of θ, α, and β, in this work I have assumed a value of 1 for θ, α, and β and have concentrated on setting the f function.

In the ROMPER system the f function has been set using the salience values returned after the knowledge base has been filtered through object perspective. Using this setting of f, the same two objects may be seen as very similar when the active perspective highlights attributes that the objects have in common and suppresses those that are disjoint between them. On the other hand, the same two objects may be seen as very different when the active perspective suppresses attributes that they have in common and highlights those that are disjoint between them.
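Stated as code, the contrast model with this perspective-based f is just a few lines. The following Python sketch is mine, with θ = α = β = 1 as assumed above; f simply sums perspective-assigned salience over a set of features, and the function names are illustrative.

# Sketch of Tversky's contrast model with a perspective-supplied
# salience table (theta = alpha = beta = 1, as assumed in this work).
def f(features, salience):
    # Sum the perspective-assigned salience of a set of features;
    # features not mentioned by the perspective contribute 0.
    return sum(salience.get(feature, 0.0) for feature in features)

def contrast(A, B, salience, theta=1.0, alpha=1.0, beta=1.0):
    # s(a,b) = theta*f(A & B) - alpha*f(A - B) - beta*f(B - A)
    return (theta * f(A & B, salience)
            - alpha * f(A - B, salience)
            - beta * f(B - A, salience))

Because the salience table is the only context-sensitive ingredient, swapping in a different active perspective changes the similarity assessment without changing the metric itself.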
This similarity metric is used by ROMPER in deciding which schema to use to respond to a particular misconception. Suppose that ROMPER must respond to the misconception "I thought a whale was a fish" when the active perspective is the "body-characteristics" perspective defined above. Recall that this perspective highlighted attributes like fin-bearing, have-gills, and breathe-through-lungs. Under this perspective, attributes common to whales and all fish are highlighted. Using a Tversky-like similarity metric, this highlighting causes whales and fish to be seen as similar. ROMPER would thus respond using the like-super schema, producing a response similar to R1.

If, on the other hand, the same misconception were encountered when the perspective was "common-people's-perception", the attributes that whales and all fish have in common would not be highlighted. Rather, attributes like scare-people and large-aquatic-creatures, shared with just a subset of fish, the sharks, would be highlighted. Under these conditions, the similarity metric would return a low similarity rating for whales and all fish (and thus the "like-super" schema would not be applicable), but a high similarity rating for whales and sharks. Thus, the "like-some-super" schema would be used to produce a response similar to R2 above.

One can imagine how other perspectives might make neither the "like-super" nor the "like-some-super" schemas applicable, causing the "no-support" schema to be used.

5.4 Choosing the Active Perspective

In order for the notion of object perspective to be truly beneficial, there must be a mechanism for choosing the active perspective based on previous discourse. While this topic is still very much open to investigation, some preliminary research has revealed several factors that might influence the choice of active perspective.

Perhaps one of the most influential pieces of information useful in choosing a perspective is the user's current goal. In [MWM85] the user's goal completely determines which perspective is active. In their work each perspective which can be taken on the domain objects is indexed by potential goals. Thus, once the system has determined what the user's goal probably is, it has also determined what perspective the user has probably taken on the domain objects.

While it is true that the user's goal is a good source of information to use to determine the probable perspective, other factors may also influence this choice. These include the attributes and objects mentioned so far in the dialogue. The mentioned attributes are obviously thought to be important, and one would therefore expect them to be given a fairly high salience rating in the active perspective. Thus, the choice of active perspective can be narrowed down to those in which the mentioned attributes appear with high salience.

By the same token, the objects mentioned so far in the dialogue can also give a clue concerning the active perspective. One would expect that the active perspective would deem these objects important. Therefore the system might look for perspectives that give high salience ratings to many of the attributes associated with objects that have been mentioned in the discourse.

In this section I have identified several factors which influence the choice of active perspective.
This choice, however, is a question which remains an open research topic. Still unanswered are questions such as: When does a perspective change? How long is a perspective active? Is there a relationship between a discourse unit [GS85] and perspective? Is there any structure to the space of perspectives that would put constraints on moving from one active perspective to another? These questions must be taken up in future research on perspective.

5.5 An Example

In this section an example is given which indicates how the choice of perspective influences how a misconception may be corrected. Recall that in correcting a misattribution, one of the correction schemas used by ROMPER called for a similar object to be offered as a possible object of confusion. A study of transcripts reveals, however, that this schema may be instantiated in different ways depending on the context. Consider the following dialogue:

U. I am interested in investing in some securities to use as savings instruments. I want something short-term and I don't have a lot of money to invest so the instrument must have small denominations. I am a bit concerned about the penalties for early withdrawal. What is the penalty on a T-bill?

S. Treasury Bills don't have a penalty. Were you thinking of a Money Market Certificate?

In this case the money market certificate was seen as being similar to the treasury bill and therefore included in the response. A different object might be used in a different context. Consider:

U. I am interested in investing in some securities. Safety is very important to me, so I would probably like to get something from the government. I am a bit concerned about the penalties for early withdrawal. What is the penalty on a T-bill?

S. Treasury Bills don't have a penalty. Were you thinking of a Treasury Bond?

The difference in these two responses can be explained by different perspectives being taken on the objects. Suppose that our knowledge base contains the following objects and attributes in the financial securities domain.

Money Market Certificates
  Maturity: 3 months
  Denominations: $1,000
  Issuer: Commercial Bank
  Penalty for Early Withdrawal: 10%
  Purchase Place: Commercial Bank
  Safety: Medium

Treasury Bills
  Maturity: 3 months
  Denominations: $1,000
  Issuer: US Government
  Purchase Place: Federal Reserve
  Safety: High

Treasury Bond
  Maturity: 7 years
  Denominations: $500
  Issuer: US Government
  Penalty for Early Withdrawal: 20%
  Purchase Place: Federal Reserve
  Safety: High

The following perspectives might be reasonable for the domain (here we are assuming salience values from low salience of 0 to high salience of 1):

Savings Instruments
  maturity - 1.0
  denominations - 1.0
  safety - 0.5

Issuing Company
  issuer - 1.0
  safety - 1.0
  purchase-place - 0.5

Notice that the perspective of Savings Instruments highlights maturity and denominations, and somewhat highlights safety. This indicates that when people are discussing securities as savings instruments, they are most interested in how long their money will be tied up and in what denominations they can save their money. The perspective of Issuing Company, on the other hand, highlights different attributes. When securities are discussed from this perspective, things like who the company is and how stable an investment in the company is become important. Other attributes of the securities are ignored (recall that attributes not mentioned in the perspective get assigned a low salience rating).
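Before walking through the calculations, here is a short sketch that reproduces them under stated representation assumptions: objects are attribute-value dictionaries, attributes whose values differ (or that appear on only one side) count as disjoint features, and salience comes from the active perspective. The encoding choices are mine, not necessarily ROMPER's.

# Sketch: perspective-filtered similarity over the securities domain.
# The dictionary encoding is an illustrative assumption.
KB = {
    "mm-cert": {"maturity": "3 months", "denominations": "$1,000",
                "issuer": "commercial bank", "penalty": "10%",
                "purchase-place": "commercial bank", "safety": "medium"},
    "t-bill":  {"maturity": "3 months", "denominations": "$1,000",
                "issuer": "US government",
                "purchase-place": "federal reserve", "safety": "high"},
    "t-bond":  {"maturity": "7 years", "denominations": "$500",
                "issuer": "US government", "penalty": "20%",
                "purchase-place": "federal reserve", "safety": "high"},
}

PERSPECTIVES = {
    "savings-instruments": {"maturity": 1.0, "denominations": 1.0, "safety": 0.5},
    "issuing-company": {"issuer": 1.0, "safety": 1.0, "purchase-place": 0.5},
}

def similarity(a, b, perspective):
    # Contrast model with theta = alpha = beta = 1; attributes not
    # mentioned by the active perspective get salience 0.
    sal = PERSPECTIVES[perspective]
    A, B = KB[a], KB[b]
    common = {k for k in A if k in B and A[k] == B[k]}
    disjoint = (set(A) | set(B)) - common
    return (sum(sal.get(k, 0.0) for k in common)
            - sum(sal.get(k, 0.0) for k in disjoint))

print(similarity("t-bill", "mm-cert", "savings-instruments"))  # 1.5  -> high
print(similarity("t-bill", "t-bond", "savings-instruments"))   # -1.5 -> low
print(similarity("t-bill", "mm-cert", "issuing-company"))      # -2.5 -> low
print(similarity("t-bill", "t-bond", "issuing-company"))       # 2.5  -> high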
Consider how perspective might affect the misconception response. Given the discourse preceding the utterance containing the misconception in our first dialogue, it is reasonable to assume that the perspective of "Savings Instruments" is the active perspective at the time of the misconception utterance.3 A system attempting to respond to this misconception might proceed by attempting to instantiate the wrong-object schema described above. Recall that this schema is applicable when there is a similar object which has the property involved in the misconception. The system might collect all objects which have the attribute in question and then test their similarity with the object involved in the misconception.

In our knowledge base there are two objects which have the attribute involved in the misconception: Money Market Certificates and T-Bonds. Suppose the attributes of these objects were assigned the salience values given by the Savings Instruments perspective. Applying the Tversky metric using the salience values attached by this perspective (and assuming a value of 1 for θ, α, and β) we get:

s(T-Bill, MM-Cert) = f(maturity, denom) - f(safety)
                   = 2 - .5 = 1.5   ===> high similarity

s(T-Bill, T-Bond) = f(safety) - f(maturity, denom)
                  = .5 - 2 = -1.5   ===> low similarity

With these calculations the system would choose the Money Market Certificate as the possible object of confusion and respond:

S. Treasury Bills don't have a penalty. Were you thinking of a Money Market Certificate?

Contrast the above calculations with calculations that might occur given a different active perspective. The discourse preceding the misconception utterance in the second example suggests the active perspective of "Issuing Company". Using the salience values attached by this perspective, the similarity metric would produce the following calculations:

s(T-Bill, MM-Cert) = f() - f(issuer, safety, purchase)
                   = 0 - 2.5 = -2.5   ===> low similarity

s(T-Bill, T-Bond) = f(issuer, safety, purchase) - f()
                  = 2.5 - 0 = 2.5   ===> high similarity

In this case a reasonable response by the system would be:

S. Treasury Bills don't have a penalty. Were you thinking of a Treasury Bond?

As the examples show, changes in the active perspective can account for the same misconception being responded to in two different ways.

3ROMPER does not calculate the active perspective. Instead, it is input to the system.

6 Conclusion

If we want our natural-language front-ends to database or expert systems to mimic human behavior, they must have the ability to handle misconceptions. This paper has described a methodology for handling object-related misconceptions and has illustrated this methodology on misconceptions involving object misclassifications.

The proposed method for responding to object-related misconceptions requires associating response schemas with certain structural configurations of the user model. The response schemas described in this paper were derived from a corpus of transcripts and were associated with user model configurations that would explain their use by a human expert in responding to a misconception.

A system might use the pairing of strategies to configurations upon encountering an object-related misconception by searching the user model for one of the identified configurations. If one was found, the associated schema could be instantiated to generate a corrective response.
The context-dependent nature of responses to misconceptions is accounted for not by having the process of correcting misconceptions change with context, but rather by having what the process works on change with context. A new notion of object perspective was introduced as an augmentation to a flat semantic network representation of the user. Object perspective provides a highlighting of the user model as a result of previous discourse. This resulting user model was shown sufficient for accounting for different responses being given to the same misconception in different situations.

7 Acknowledgements

I would like to thank my advisors, Aravind Joshi and Bonnie Webber, for their many helpful comments throughout the course of this work. Special thanks also go to Sandra Carberry and Martha Pollack for their comments on various drafts of this paper.

References

[BB78] J.S. Brown and R.R. Burton. Diagnostic models for procedural bugs in basic mathematical skills. Cognitive Science, 2(2):155-192, 1978.

[BW77] D. G. Bobrow and T. Winograd. An overview of KRL, a knowledge representation language. Cognitive Science, 1(1):3-46, January 1977.

[GJW83] B. Grosz, A.K. Joshi, and S. Weinstein. Providing a unified account of definite noun phrases in discourse. In Proceedings of the 21st Annual Meeting, pages 44-50, Association for Computational Linguistics, Cambridge, Mass, June 1983.

[Gro77] B. Grosz. The Representation and Use of Focus in Dialogue Understanding. Technical Report 151, SRI International, Menlo Park, Ca., 1977.

[Gro81] B. Grosz. Focusing and description in natural language dialogues. In A. Joshi, B. Webber, and I. Sag, editors, Elements of Discourse Understanding, pages 85-105, Cambridge University Press, Cambridge, England, 1981.

[GS85] B. Grosz and C. Sidner. Discourse structure and the proper treatment of interruptions. In Proceedings of the 1985 International Joint Conference on Artificial Intelligence, IJCAI-85, Los Angeles, Ca., August 1985.

[Kar85] Robin Karlin. Romper Mumbles. Technical Report, University of Pennsylvania, May 1985.

[Man84] W. C. Mann. Discourse structures for text generation. In Proceedings of Coling84, pages 367-375, Association for Computational Linguistics, Stanford University, Ca., July 1984.

[McC85] K.F. McCoy. Correcting Object-Related Misconceptions. PhD thesis, University of Pennsylvania, December 1985.

[McD80] D. D. McDonald. Natural Language Production as a Process of Decision Making Under Constraint. PhD thesis, MIT, 1980.

[McK82] K. McKeown. Generating Natural Language Text in Response to Questions About Database Structure. PhD thesis, University of Pennsylvania, May 1982.

[MT83] W. C. Mann and S. A. Thompson. Relational Propositions in Discourse. Technical Report ISI/RR-83-115, ISI/USC, November 1983.

[MWM85] K. McKeown, M. Wish, and K. Matthews. Tailoring explanations for the user. In Proceedings of the 1985 International Joint Conference on Artificial Intelligence, Los Angeles, CA, August 1985.

[SC80] A.L. Stevens and A. Collins. Multiple conceptual models of a complex system. In Richard E. Snow, Pat-Anthony Federico, and William E. Montague, editors, Aptitude, Learning, and Instruction, pages 177-197, Erlbaum, Hillsdale, N.J., 1980.

[Sid83] C. L. Sidner. Focusing in the comprehension of definite anaphora. In Michael Brady and Robert Berwick, editors, Computational Models of Discourse, pages 267-330, MIT Press, Cambridge, Ma, 1983.

[Tve77] A. Tversky. Features of similarity. Psychological Review, 84:327-352, 1977.
[TWF*82] F. Tou, M. Williams, R. Fikes, A. Henderson, and T. Malone. Rabbit: an intelligent database assistant. In Proceedings of AAAI-82, pages 314-317, AAAI, Carnegie-Mellon University, August 1982.

[Woo84] Beverly P. Woolf. Context Dependent Planning in a Machine Tutor. PhD thesis, University of Massachusetts, May 1984.
Encoding and Acquiring Meanings for Figurative Phrases *

Michael G. Dyer
Uri Zernik

Artificial Intelligence Laboratory
Computer Science Department
3531 Boelter Hall
University of California
Los Angeles, California 90024

Abstract

Here we address the problem of mapping phrase meanings into their conceptual representations. Figurative phrases are pervasive in human communication, yet they are difficult to explain theoretically. In fact, the ability to handle idiosyncratic behavior of phrases should be a criterion for any theory of lexical representation. Due to the huge number of such phrases in the English language, phrase representation must be amenable to parsing, generation, and also to learning. In this paper we demonstrate a semantic representation which facilitates, for a wide variety of phrases, both learning and parsing.

1. Introduction

The phrasal approach to language processing [Becker75, Pawley83, Fillmore86] emphasizes the role of the lexicon as a knowledge source. Rather than maintaining a single generic lexical entry for each word, e.g.: take, the lexicon contains many phrases, e.g.: take over, take it or leave it, take it up with, take it for granted, etc. Although this approach proves effective in parsing and in generation [Wilensky84], there are three problems which require further investigation.

First, phrase interaction: the lexicon provides representation for single phrases, such as take to task and make up one's mind. Yet it is required to analyze complex clauses such as he made up his mind to take her to task. The problem lies with the way the meanings of the two phrases interact to form the compound meaning.

Second, phrase ambiguity: [Zernik86] phrasal parsing shifts the task from single-word selection to the selection of entire lexical phrases. When a set of lexical phrases appear syntactically equivalent, i.e.: he ran into a friend, he ran into an 1986 Mercedes, he ran into the store, and he ran into trouble again, disambiguation must be performed by semantic means. The conditions which facilitate phrase discrimination reside within each lexical entry itself.

Third, phrase idiosyncracy: the meaning representation of phrases such as lay down the law vs. put one's foot down must distinguish the special use of each phrase.

This paper is concerned with the representation of phrase meanings and the process of acquiring these meanings from examples in context.

* This research was supported in part by a grant from the ITA Foundation.

1.1 The Task Domain

Consider the figurative phrases in the sentences below, as they are parsed by the program RINA [Zernik85a].

S1: The Democrats in the house carried the water for Reagan's tax-reform bill.**

S2: The famous mobster evaded prosecution for years. Finally, they threw the book at him for tax evasion.

Depending on the contents of the given lexicon, the program may interpret these sentences in one of two ways. On the one hand, assuming that the meaning of a phrase exists in the lexicon, the program applies that meaning in the comprehension of the sentence. In S1, the program understands that the Democratic representatives did the "dirty" work in passing the bill for Reagan. On the other hand, if the figurative phrase does not exist in the lexicon, an additional task is performed: the program must figure out the meaning of the new phrase, using existing knowledge: First, the meanings given for the single words carry and water are processed literally.
Second, the context which exists prior to the application of the phrase provides a hypothesis for the formation of the phrase meaning. A dialog with RINA proceeds as follows:

RINA: They moved water?
User: No. The Democrats carried the water for Reagan.
RINA: They helped him pass the bill?

Thus, RINA detects the metaphor underlying the phrase, and using the context, it learns that carry the water means helping another person do a hard job.

Consider encounters with three other phrases:

Jenny wanted to go punk but her father
S3: laid down the law.
S4: put his foot down.
S5: read her the riot act.

In all these cases, it is understood from the context that Jenny's father objected to her plan of going punk (aided by the word but, which suggests that something went wrong with Jenny's goals). However, what is the meaning of each one of the phrases, and in particular do all these phrases convey identical concepts?

** This sentence was recorded off the ABC television program Nightline, December 12, 1985.

1.2 The Issues

In encoding meanings of figurative phrases, we must address the following issues.

Underlying Knowledge

What is the knowledge required in order to encode the phrase throw the book? Clearly, this knowledge includes the situation and the events that take place in court, namely the judge punishing the defendant. The phrase carry the water, for example, requires two kinds of knowledge: (a) Knowledge about the act of carrying water, which can support the analysis of the phrase metaphor. (b) Knowledge about general plans and goals, and the way one person agrees to serve as an agent in the execution of the plans of another person. This knowledge supports the analysis of the context. While the phrases above could be denoted in terms of plans and goals, other phrases, i.e.: rub one's nose in it, climb the walls, and have a chip on one's shoulder, require knowledge about emotions, such as embarrassment and frustration. Unless the program maintains knowledge about resentment, the phrase have a chip on the shoulder, for example, cannot be represented. Thus, a variety of knowledge structures take place in encoding figurative phrases.

Representing Phrase Meanings and Connotations

The appearance of each phrase carries certain implications. For example, John put his foot down implies that John refused a request, and on the other hand, John read the riot act implies that he reacted angrily about a certain event in the past. John gave Mary a hard time implies that he refused to cooperate, and argued with Mary since he was annoyed, while John laid down the law implies that John imposed his authority in a discussion. The representation of each phrase must account for such implications.

Three different phrases in sentences S3-S5 are applied in the same context. However, not any phrase may be applied in every context. For example, consider the context established by this paragraph:

S6: Usually, Mary put up with her husband's cooking, but when he served her cold potatoes for breakfast, she put her foot down.

Could the phrase in this sentence be replaced by the other two phrases: (a) lay down the law, or (b) read the riot act? While understandable, these two phrases are not appropriate in that context. The sentence she read him the riot act does not make sense in the context of debating food taste. The sentence she laid down the law does not make as much sense since there is no argument between individuals with non-equal authority.
Thus, there are conditions for the applicability of each lexical phrase in various contexts. These conditions support phrase disambiguation, and must be included as part of a phrase meaning.

Phrase Acquisition

Phrase meanings are learned from examples given in context. Suppose the structure and meaning of put one's foot down is acquired through the analysis of the following sentences:

S6: Usually, Mary put up with her husband's cooking, but when he served her cold potatoes for breakfast, she put her foot down.

S7: Jenny was dating a new boyfriend and started to show up after midnight. When she came at 2am on a weekday, her father put his foot down: no more late dates.

S8: From time to time I took money from John, and I did not always remember to give it back to him. He put his foot down yesterday when I asked him for a quarter.

Since each example contains many concepts, both appropriate and inappropriate, the appropriate concepts must be identified and selected. Furthermore, although each example provides only a specific episode, the ultimate meaning must be generalized to encompass further episodes.

Literal Interpretation

Single-word senses (e.g.: the sense of the particle into in run into another car), as well as entire metaphoric actions (e.g.: carry the water in the Democratic representatives carried the water for Reagan's tax-reform bill), take part in forming the meaning of unknown figurative phrases. Can the meaning of a phrase be acquired in spite of the fact that its original metaphor is unknown, as is the case with read the riot act (what act exactly?) or carry the water (carry what water)?

2. The Program

The program RINA [Zernik85b] is designed to parse sentences which include figurative phrases. When the meaning of a phrase is given, that meaning is used in forming the concept of the sentence. However, when the phrase is unknown, the figurative phrase should be acquired from the context. The program consists of three components: phrasal parser, phrasal lexicon, and phrasal acquisition module.

2.1 Phrasal Parser

A lexical entry, a phrase, is a triple associating a linguistic pattern with its concept and a situation. A clause in the input text is parsed in three steps:

(1) Matching the phrase pattern against the clause in the text.
(2) Validating in the context the relations specified by the phrase situation.
(3) If both (1) and (2) are successful, then instantiating the phrase concept using variable bindings computed in (1) and (2).

For example, consider the sentence:

S9: Fred wanted to marry Sheila, but she ducked the issue for years. Finally he put her on the spot.

The figurative phrase is parsed relative to the context established by the first sentence. Assume that the lexicon contains a single phrase, described informally as:

phrase pattern: Person1 put Person2 on the spot
situation: Person2 avoids making a certain tough decision
concept: Person1 prompts Person2 to make that decision

The steps in parsing the clause using this phrase are:

(1) The pattern is matched successfully against the text. Consequently, Person1 and Person2 are bound to Fred and Sheila respectively.
(2) The situation associated with the pattern is validated in the context. After reading the first phrase the context contains two concepts: (a) Fred wants to marry Sheila, and (b) she avoids a decision. The situation matches the input.
(3) Since both (1) and (2) are successful, the pattern itself is instantiated, adding to the context: Fred prompted Sheila to make up her mind.
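As a minimal sketch of this three-step loop, consider the Python below. RINA itself uses unification over a phrasal lexicon; the flat, position-by-position pattern matcher and the predicate names here are simplifying assumptions of mine.

# Sketch of the three-step phrasal parse: match pattern, validate
# the situation against the context, instantiate the concept.
PHRASE = {
    "pattern":   ("?x", "put", "?y", "on", "the", "spot"),
    "situation": ("avoids-decision", "?y"),
    "concept":   ("prompts-decision", "?x", "?y"),
}

def match(pattern, words):
    # Bind ?-variables to words, position by position (no real unification).
    if len(pattern) != len(words):
        return None
    bindings = {}
    for p, w in zip(pattern, words):
        if p.startswith("?"):
            bindings[p] = w
        elif p != w:
            return None
    return bindings

def substitute(template, bindings):
    return tuple(bindings.get(t, t) for t in template)

def parse(phrase, words, context):
    bindings = match(phrase["pattern"], words)                    # step (1)
    if bindings is None:
        return None
    if substitute(phrase["situation"], bindings) not in context:  # step (2)
        return None
    return substitute(phrase["concept"], bindings)                # step (3)

context = {("wants-to-marry", "Fred", "Sheila"), ("avoids-decision", "Sheila")}
print(parse(PHRASE, ("Fred", "put", "Sheila", "on", "the", "spot"), context))
# -> ('prompts-decision', 'Fred', 'Sheila')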
Phrase situation, distinguished from phrase concept, is introduced in our representation since it helps solve three problems: (a) in disambiguation it provides a discrimination condition for phrase selection, (b) in generation it determines if the phrase is applicable, and (c) in acquisition it allows the incorporation of the input context as part of the phrase.

2.2 Phrasal Lexicon

RINA uses a declarative phrasal lexicon which is implemented through GATE [Mueller84] using unification [Kay79] as the grammatic mechanism. Below are some sample phrasal patterns.

P1: ?x <lay down> <the law>
P2: ?x throw <the book> <at ?y>

These patterns actually stand for the slot fillers given below:

P1: (subject ?x (class person))
    (verb (root lay) (modifier down))
    (object (determiner the) (noun law))

P2: (subject ?x (class person))
    (verb (root throw))
    (object ?z (marker at) (class person))
    (object (determiner the) (noun book))

This notation is described in greater detail in [Zernik85b].

2.3 Phrase Acquisition through Generalization and Refinement

Phrases are acquired in a process of hypothesis formation and error correction. The program generates and refines hypotheses about both the linguistic pattern and the conceptual meaning of phrases. For example, in acquiring the phrase carry the water, RINA first uses the phrase already existing in the lexicon, but it is too general a pattern and does not make sense in the context:

?x carry:verb ?z:phys-obj <for ?y>

Clearly, such a syntactic error stems from a conceptual error. Once corrected, the hypothesis is:

?x carry:verb <the water> <for ?y>

The meaning of a phrase is constructed by identifying salient features in the context. Such features are given in terms of scripts, relationships, plan/goal situations, and emotions. For example, carry the water is given in terms of an agency goal situation (?x executes a plan for ?y) on the background of a rivalry relationship (?x and ?y are opponents). Only by detecting these elements in the context can the program learn the meaning of the phrase.

3. Conceptual Representation

The key to phrase acquisition is appropriate conceptual representation, which accounts for various aspects of phrase meanings. Consider the phrase to throw the book in the following paragraph:

S2: The famous mobster avoided prosecution for years. Finally they threw the book at him for tax evasion.

We analyze here the components in the representation of this phrase.

3.1 Scripts

Basically, the figurative phrase depicts the trial script, which is given below:

(a) The prosecutor says his arguments to the judge
(b) The defendant says his arguments to the judge
(c) The judge determines the outcome, either:
    (1) to punish the defendant
    (2) not to punish the defendant

This script involves a Judge, a Defendant, and a Prosecutor, and it describes a sequence of events. Within the script, the phrase points to a single event, the decision to punish the defendant. However, this event presents only a rough approximation of the real meaning, which requires further refinement.

(a) The phrase may be applied in situations that are more general than the trial script itself. For example:

S10: When they caught him cheating in an exam for the third time, the dean of the school decided to throw the book at him.

Although the context does not contain the specific trial script, the social authority which relates the judge and the defendant exists also between the dean and John.
(b) The phrase in S2 asserts not only that the mobster was punished by the judge, but also that a certain prosecution strategy was applied against him.

3.2 Specific Plans and Goals

In order to accommodate such knowledge, scripts incorporate specific planning situations. For example, in prosecuting a person, there are three options, a basic rule and two deviations:

(a) Basically, for each law violation, assign a penalty as prescribed in the book.
(b) However, in order to loosen a prescribed penalty, mitigating circumstances may be taken into account.
(c) And on the other hand, in order to toughen a prescribed penalty, additional violations may be thrown in.

In S2 the phrase conveys the concept that the mobster is punished for tax evasion since they cannot prosecute him for his more serious crimes. It is the selection of this particular prosecution plan which is depicted by the phrase. The phrase representation is given below:

phrase pattern ?x:person throw:verb <the book> <at ?y:person>
situation ($trial (prosecution ?x) (defendant ?y))
concept (act (select-plan (actor prosecution)
              (plan (ulterior-crime (crime ?c) (crime-of ?y)))))
        (result (thwart-goal (goal ?g) (goal-of ?y)))

where ulterior-crime is the third prosecution plan above.

3.3 Relationships

The authority relationship [Schank78, Carbonell79] is pervasive in phrase meanings, appearing in many domains: judge-defendant, teacher-student, employer-employee, parent-child, etc. The existence of authority creates certain expectations: if X presents an authority for Y, then:

(a) X issues rules which Y has to follow.
(b) Y is expected to follow these rules.
(c) Y is expected to support goals of X.
(d) X may punish Y if Y violates the rules in (a).
(e) X cannot dictate actions of Y; X can only appeal to Y to act in a certain way.
(f) X can delegate his authority to Z, which becomes an authority for Y.

In S10, the dean of the school presents an authority for John. John violated the rules of the school and is punished by the dean. More phrases involving authority are given by the following examples.

S11: I thought that parking ticket was unfair so I took it up with the judge.

S12: My boss wanted us to stay in the office until 9pm every evening to finish the project on time. Everybody was upset, but nobody stood up to the boss.

S13: Jenny's father laid down the law: no more late dates.

The representation of the phrase take it up with, for example, is given below:

phrase pattern ?x:person <take:verb up> ?z:problem <with ?y:person>
situation (authority (high ?y) (low ?x))
concept (act (auth-appeal (actor ?x) (to ?y) (object ?z))
         (purpose (act (auth-decree (actor ?y) (to ?x) (object ?z)))
                  (result (support-plan (plan-of ?x)))))

The underlying situation is an authority relationship between X and Y. The phrase implies that X appeals to Y so that Y will act in favor of X.

3.4 Abstract Planning Situations

General planning situations, such as agency, agreement, goal-conflict, and goal-coincidence [Wilensky83], are addressed in the examples below.

S1: The Democrats in the house carried the water for Reagan in his tax-reform bill.

The phrase in S1 is described using both rivalry and agency. In contrast to expectations stemming from rivalry, the actor serves as an agent in executing his opponent's plans.
The representation of the phrase is given below:

phrase pattern ?x:person carry:verb <the water ?z:plan> <for ?y:person>
situation (rivalry (actor1 ?x) (actor2 ?y))
concept (agency (agent ?x) (plan ?z) (plan-of ?y))

Many other phrases describe situations at the abstract goal/plan level. Consider S14:

S14: I planned to do my CS20 project with Fred. I backed out of it when I heard that he had flunked CS20 twice in the past.

Back out of depicts an agreed plan which is cancelled by one party in contradiction to expectations stemming from the agreement.

S15: John's strongest feature in arguing is his ability to fall back on his quick wit.

Fall back on introduces a recovery of a goal through an alternative plan, in spite of a failure of the originally selected plan.

S16: My standing in the tennis club deteriorated since I was bogged down with CS20 assignments the whole summer.

In bog down, a goal competition over the actor's time exists between a major goal (tennis) and a minor goal (CS20). The major goal fails due to the efforts invested in the minor goal.

3.5 Emotions and Attitudes

In text comprehension, emotions [Dyer83, Mueller85] and attitudes are accounted for in two ways: (a) they are generated by goal/planning situations, such as goal failure and goal achievement, and (b) they generate goals, and influence plan selection. Some examples of phrases involving emotions are given below.

Humiliation is experienced by a person when other people achieve a goal which he fails to achieve. The phrase in S17 depicts humiliation which is caused when John reminds the speaker of his goal situation:

S17: I failed my CS20 class. My friend John rubbed my nose in it by telling me that he got an A+.

Resentment is experienced by a person when a certain goal of his is not being satisfied. This goal situation causes the execution of plans by that person to deteriorate. The phrase in S18 depicts such an attitude:

S18: Since clients started to complain about John, his boss asked him if he had a chip on his shoulder.

Embarrassment is experienced by a person when his plan failure is revealed to other people. The phrase in S19 depicts embarrassment which is caused when a person is prompted to make up his mind between several bad options.

S19: Ted Koppel put his guest on the spot when he asked him if he was ready to denounce apartheid in South Africa.

In all the examples above, it is not the emotion itself which is conveyed by the phrase. Rather, the concept conveys a certain goal situation which causes that emotion. For example, in S17 (rub one's nose) a person does something which causes the speaker to experience humiliation.

4. Learning Phrase Meanings

Consider the situation when a new phrase is first encountered by the program:

User: The Democrats in the house carried the water for Reagan's tax-reform bill.
RINA: They moved water?
User: No. They carried the water for him.
RINA: They helped him pass the bill.

Three sources take part in forming the new concept: (a) the linguistic clues, (b) the context, and (c) the metaphor.

4.1 The Context

The context prior to reading the phrase includes two concepts:

(a) Reagan has a goal of passing a law.
(b) The Democrats are Reagan's rivals; they are expected to thwart his goals, his legislation in particular.

These concepts provide the phrase situation which specifies the context required for the application of the phrase.
4.2 The Literal Interpretation

The literal interpretation of carried the water as "moved water" does not make sense given the goal/plan situation in the context. As a result, RINA generates the literal interpretation and awaits confirmation from the user. If the user repeats the utterance or generates a negation, then RINA generates a number of utterances, based on the current context, in hypothesizing a novel phrase interpretation.

4.3 The Metaphor

Since the action of moving water does not make sense literally, it is examined at the level of plans and goals: Moving water from location A to B is a low-level plan which supports other high-level plans (i.e., using the water in location B). Thus, at the goal/plan level, the phrase is perceived as: "they executed a low-level plan as his agents" (the agency is suggested by the prepositional phrase for his tax-reform bill; i.e., they did an act for his goal). This is taken as the phrase concept.

4.4 The Constructed Meaning

The new phrase contains three parts:

(a) The phrase pattern is extracted from the example sentence:
    ?x carry:verb <the water> <for ?y>
(b) The phrase situation is extracted from the underlying context:
    (rivalry (actor1 ?x) (actor2 ?y))
(c) The phrase concept is taken from the metaphor:
    (plan-agency (actor ?x) (plan ?z) (plan-of ?y))

Thus, the phrase means that in a rivalry situation, an opponent served as an agent in carrying out a plan.

5. Future Work and Conclusions

The phrasal approach elevates language processing from interaction among single words to interaction among entire phrases. Although it increases substantially the size of the lexicon, this chunking simplifies the complexity of parsing since clauses in the text include fewer modules which interact in fewer ways. The phrasal approach does reduce the power of the program in handling non-standard uses of phrases. For example, consider the situation where a mobster kidnaps a judge, points the gun at him, and says: "No funny book you could throw at me now would do you any good!"* Our current parser would certainly fail in matching the syntactic pattern and inferring the ironic meaning. The analysis of such a sentence would require that the program associate the two existing phrases, the general throw something and the figurative throw the book, and make inferences about the pun meant by the mobster. Such examples show that it is difficult to capture human behavior through a single parsing paradigm.

* This example is attributed to an anonymous referee.

Parsing text is a futile task unless it addresses the ultimate objective of language processing, namely mapping text into conceptual representation. To this end, we have shown the structure of a lexicon which provides the association between syntactic patterns and their semantic concepts. However, due to the huge size of the English language, not all phrases can be given at the outset. A parsing program is required to handle unknown phrases as they are encountered in the text. In RINA we have shown how new phrases can be acquired from examples in context.

Phrase acquisition from context raises questions regarding the volume of knowledge required for language processing. A phrase such as throw the book requires highly specialized knowledge involving sentencing strategies in court. Now, this is only one figurative phrase out of many. Thus, in order to handle figurative phrases in general, a program must ultimately have access to all the knowledge of a socially mature person.
Fortunately, learning makes this problem more tractable. In the process of phrase acquisition, phrase meaning is elevated from the specific domain in which the phrase originated to a level of abstract goal situations. For example, once throw the book is understood as the act of authority-decree, then knowledge of the trial situation no longer needs to be accessed. The phrase is well comprehended in other domains: my boss threw the book at me, his parents threw the book at him, her teacher threw the book at her, etc. At that level, a finite number of goal situations can support the application of figurative phrases across a very large number of domains.

References

[Becker75] Becker, Joseph D., "The Phrasal Lexicon," pp. 70-73 in Proceedings Interdisciplinary Workshop on Theoretical Issues in Natural Language Processing, Cambridge, Massachusetts (June 1975).

[Carbonell79] Carbonell, J. G., "Subjective Understanding: Computer Models of Belief Systems," TR-150, Yale, New Haven, CT (1979). Ph.D. Dissertation.

[Dyer83] Dyer, Michael G., In-Depth Understanding: A Computer Model of Integrated Processing for Narrative Comprehension, MIT Press, Cambridge, MA (1983).

[Fillmore86] Fillmore, C., P. Kay, and M. O'Connor, Regularity and Idiomaticity in Grammatical Constructions: The Case of Let alone, UC Berkeley, Department of Linguistics (1986). Unpublished manuscript.

[Kay79] Kay, Martin, "Functional Grammar," pp. 142-158 in Proceedings 5th Annual Meeting of the Berkeley Linguistic Society, Berkeley, California (1979).

[Mueller84] Mueller, E. and U. Zernik, "GATE Reference Manual," UCLA-AI-84-5, Computer Science, AI Lab (1984).

[Mueller85] Mueller, E. and M. Dyer, "Daydreaming in Humans and Computers," in Proceedings 9th International Joint Conference on Artificial Intelligence, Los Angeles, CA (1985).

[Pawley83] Pawley, A. and H. Syder, "Two Puzzles for Linguistic Theory: Nativelike Selection and Nativelike Fluency," in Language and Communication, ed. J. C. Richards and R. W. Schmidt, Longman, London (1983).

[Schank78] Schank, R. and J. Carbonell, "The Gettysburg Address: Representing Social and Political Acts," TR-127, Yale University, Department of Computer Science, New Haven, CT (1978).

[Wilensky83] Wilensky, Robert, Planning and Understanding, Addison-Wesley, Massachusetts (1983).

[Wilensky84] Wilensky, R., Y. Arens, and D. Chin, "Talking to UNIX in English: an Overview of UC," Communications of the ACM 27(6), pp. 574-593 (June 1984).

[Zernik85a] Zernik, Uri and Michael G. Dyer, "Learning Phrases in Context," in Proceedings of the 3rd Machine Learning Workshop, New Brunswick, NJ (June 1985).

[Zernik85b] Zernik, Uri and Michael G. Dyer, "Towards a Self-Extending Phrasal Lexicon," in Proceedings 23rd Annual Meeting of the Association for Computational Linguistics, Chicago, IL (July 1985).

[Zernik86] Zernik, U. and M. G. Dyer, "Disambiguation and Acquisition using the Phrasal Lexicon," in Proceedings 11th International Conference on Computational Linguistics, Bonn, Germany (1986).
SEMANTICALLY SIGNIFICANT PATTERNS IN DICTIONARY DEFINITIONS *

Judith Markowitz
Computer Science Department, De Paul University, Chicago, IL 60604

Thomas Ahlswede and Martha Evens
Computer Science Department, Illinois Institute of Technology, Chicago, IL 60616

ABSTRACT

Natural language processing systems need large lexicons containing explicit information about lexical-semantic relationships, selection restrictions, and verb categories. Because the labor involved in constructing such lexicons by hand is overwhelming, we have been trying to construct lexical entries automatically from information available in the machine-readable version of Webster's Seventh Collegiate Dictionary. This work is rich in implicit information; the problem is to make it explicit. This paper describes methods for finding taxonomy and set-membership relationships, recognizing nouns that ordinarily represent human beings, and identifying active and stative verbs and adjectives.

INTRODUCTION

Large natural language processing systems need lexicons much larger than those available today, with explicit information about lexical-semantic relationships, about usage, about forms, about morphology, about case frames and selection restrictions and other kinds of collocational information. Apresyan, Mel'cuk, and Zholkovsky studied the kind of explicit lexical information needed by non-native speakers of a language. Their Explanatory-Combinatory Dictionary (1970) explains how each word is used and how it combines with others in phrases and sentences. Their dream has now been realized in a full-scale dictionary of Russian (Mel'cuk and Zholkovsky, 1985) and in example entries for French (Mel'cuk et al., 1984). Computer programs need still more explicit and detailed information. We have discussed elsewhere the kind of lexical information needed in a question answering system (Evens and Smith, 1978) and by a system to generate medical case reports (Li et al., 1985).

* This research was supported by the National Science Foundation under IST-85-10069.

A number of experiments have shown that relational thesauri can significantly improve the effectiveness of an information retrieval system (Fox, 1980; Evens et al., 1985; Wang et al., 1985). A relational thesaurus is used to add further terms to the query, terms that are related to the original by lexical relations like synonymy, taxonomy, set-membership, or the part-whole relation, among others. The addition of these related terms enables the system to identify more relevant documents. The development of such relational thesauri would be comparatively simple if we had a large lexicon containing relational information. (A comparative study of lexical relations can be found in Evens et al., 1980.)

The work involved in developing a lexicon for a large subset of English is so overwhelming that it seems appropriate to try to build a lexicon automatically by analyzing information in a machine-readable dictionary. A collegiate-level dictionary contains an enormous amount of information about thousands of words in the natural language it describes. This information is presented in a form intended to be easily understood and used by a human being with at least some command of the language. Unfortunately, even when the dictionary has been transcribed into machine-readable form, the knowledge which a human user can acquire from the dictionary is not readily available to the computer. There have been a number of efforts to extract information from machine-readable dictionaries.
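In outline, the extraction methods described below reduce to matching a small inventory of defining formulae against definition text and reading relations or features off the match. The following sketch is purely illustrative -- the regular expressions and relation labels are simplifications of the patterns presented in the rest of the paper, not the authors' actual programs:

```python
import re

# Defining formulae and the lexical information they signal; the
# inventory paraphrases the findings reported in the paper below.
FORMULAE = [
    (re.compile(r"^any of (?:an|a|the|various|several|two|numerous)\s+(.+)"),
     "taxonomy"),
    (re.compile(r"^any (.+)"), "taxonomy"),
    (re.compile(r"^a member of (.+)"), "member-set"),    # headword is +human
    (re.compile(r"^one that (.+)"), "generic-agent"),    # often +human
    (re.compile(r"^the act of (\w+ing)\b"), "act-of"),   # related action verb
    (re.compile(r"^of or relating to (.+)"), "stative-adjective"),
]

def classify(definition):
    """Return (label, matched material) for the first matching formula."""
    for pattern, label in FORMULAE:
        m = pattern.match(definition.lower())
        if m:
            return label, m.group(1)
    return None, None

print(classify("a member of a ship's crew"))  # ('member-set', "a ship's crew")
print(classify("the act of forgiving"))       # ('act-of', 'forgiving')
```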
Amsler (1980, 1981, 1982) and Amsler and John White (1979) mapped out the taxonomic hierarchies of nouns and verbs in the Merriam-Webster Pocket Dictionary. Michiels (1981, 1983) analyzed the Longman Dictionary of Contemporary English (LDOCE), taking advantage of the fact that that dictionary was designed to some extent to facilitate computer manipulation. Smith (1981) studied the "defining formulae" -- significant recurring phrases -- in a selection of adjective definitions from W7. Carolyn White (1983) has developed a program to create entries for Sager's Linguistic String Parser (1981) from W7. Chodorow and Byrd (1985) have extracted taxonomic hierarchies, associated with feature information, from LDOCE and W7.

We have parsed W7 adjective definitions (Ahlswede, 1985b) using Sager's Linguistic String Parser (Sager, 1981) in order to automatically identify lexical-semantic relations associated with defining formulae. We have also (Ahlswede and Evens, 1983) identified defining formulae in noun, verb and adverb definitions from W7. At present we are working on three interrelated projects: identification and analysis of lexical-semantic relations in or out of W7; generation of computed definitions for words which are used or referred to but not defined in W7; and parsing of the entire dictionary (or as much of it as possible) to generate from it a large general lexical knowledge base.

This paper represents a continuation of our work on defining formulae in dictionary definitions, in particular definitions from W7. The patterns we deal with are limited to recurring phrases, such as "any of a" or "a quality or state of" (common in noun definitions) and "of or relating to" (common in adjective definitions). From such phrases, we gain information not only about the words being defined but also about the words used in the definitions and other words in the lexicon. Specifically, we can extract selectional information, co-occurrence relations, and lexical-semantic relations. These methods of extracting information from W7 were designed for use in the lexicon builder described earlier by Ahlswede (1985a).

The computational steps involved in this study were relatively simple. First, W7 definitions were divided by part of speech into separate files for nouns, verbs, adjectives, and others. Then a separate Keyword In Context (KWIC) index was made for each part of speech. Hypotheses were tried out initially on a subset of the dictionary containing only those words which appeared eight or more times in the Kucera and Francis corpus (1967) of a million words of running English text. Those that proved valid for this subset were then tested on the full dictionary. This work would have been impossible without the kind permission of the G. & C. Merriam Company to use the machine-readable version of W7 (Olney et al., 1967).

NOUN TAXONOMY

Noun definitions which begin with "Any" signal a taxonomic relationship between the noun being defined and a taxonomic superordinate which follows the word "Any." One subset of the formulae beginning with "Any" has the form "Any"-NP, where the NP can be a noun, noun phrase, or a co-ordinated noun or adjective structure.

1a. alkyl  any univalent aliphatic, aromatic-aliphatic, or alicyclic hydrocarbon radical.
 b. ammunition  any material used in attack or defense.
 c. streptococcus  any coccus in chains
 d. nectar  any delicious drink
 e. discord  any harsh or unpleasant sound
 f. milkwort  any herb of a genus (Polygala) of the family Polygalaceae, the milkwort family

In these definitions the taxonomic superordinate of the noun being defined is the head noun of the NP immediately following "Any". The superordinate of "alkyl" is "radical," which is the head of the co-ordinated structure following "Any", whereas the superordinate of "ammunition" is the unmodified noun "material." Of the 97 examples of "Any"-NP, only two failed to contain an overt taxonomic superordinate following "Any."

2a. week  any seven consecutive days
 b. couple  any two persons paired together

In each of these cases there is an implicit taxonomic superordinate "set."

The second frequently occurring subset of noun definitions containing "Any" begins with the following pattern: "Any of"-NP. This pattern has two principal realizations depending upon what immediately follows "Any of." In one sub-pattern a quantifier, numeric expression, or "the" follows the initial "Any of" and begins an NP which contains the superordinate of the noun being defined. This pattern is similar to that described above for the "Any"-NP formula.

3a. doctor  any of several brightly colored artificial flies
 b. allomorph  any of two or more distinct crystalline forms of the same substance.
 c. elder  any of various church officers

The other sub-pattern expresses a biological taxonomic relationship and has the following definition structure:

    "Any of a/an"
    <optional> modifier
    taxonomic level
    "(" scientific name ")"
    "of"
    taxonomic superordinate
    either attributes or taxonomic subordinate

The modifier is optional and modifies the taxonomic level of the noun being defined; the capitalized scientific name of the level follows in parentheses; the taxonomic superordinate can be a noun or a complex NP and is the object of the second "of" in the formula; and the information following the superordinate is generally a co-ordinated structure, frequently co-ordinated NPs. Of the 901 instances of the definition-initial "Any of a/an" sequence, 853, or 95 per cent, were biological definitions.

4a. ant  any of a family (Formicidae) of colonial hymenopterous insects with complex social organization and various castes performing special duties.
 b. grass  any of a large family (Gramineae) of monocotyledonous mostly herbaceous plants with jointed stems, slender sheathing leaves, and flowers borne in spikelets of bracts.
 c. acarid  any of an order (Acarina) of arachnids including mites and ticks.
 d. cercis  any of a small genus (Cercis) of leguminous shrubs or low trees.
 e. nematode  any of a class or phylum (Nematoda) of elongated cylindrical worms parasitic in animals or plants or free-living in soil or water.
 f. archaeornis  any of a genus (Archaeornis) of upper Jurassic toothed birds.

The only sequences which break from the pattern described above are non-biological definitions, which do not have parenthetical information following the head noun of the NP following "Any of a/an", and biological definitions where that head noun is "breed."

5a. globulin  any of a class of simple proteins (as myosin) insoluble in pure water but soluble in dilute salt solutions that occur widely in plant and animal tissues.
 b. rottweiler  any of a breed of tall vigorous black short-haired cattle dogs.
 c. poland china  any of an American breed of large white-marked black swine of the lard type.
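The regularity just described suggests a simple recognizer for the biological "Any of a/an" pattern. The sketch below is hypothetical and deliberately simplified: it ignores coordinated levels such as "class or phylum" in 4e, and it will not match the parenthesis-free "breed" definitions in (5).

```python
import re

# "Any of a/an <modifier> LEVEL (Scientific-name) of SUPERORDINATE..."
BIO = re.compile(
    r"^any of an? "
    r"(?:\w+ )?"                    # optional modifier, e.g. "large", "small"
    r"(?P<level>\w+) "              # taxonomic level: family, genus, order, ...
    r"\((?P<sci>[A-Z][\w.]*)\) "    # capitalized scientific name in parentheses
    r"of (?P<superordinate>.+)")    # the superordinate NP, often complex

def parse_bio(definition):
    """Return the pattern's parts, or None for non-biological definitions."""
    m = BIO.match(definition)
    return m.groupdict() if m else None

print(parse_bio("any of a small genus (Cercis) of leguminous shrubs or low trees"))
# {'level': 'genus', 'sci': 'Cercis',
#  'superordinate': 'leguminous shrubs or low trees'}
```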
The definition for "globulin" illustrates that even when a non-biological definition has a parenthesis, that parenthetical information does not immediately follow the NP following "Any of a/an." The other definitions in (5) are instances of "breed" following "Any of a/an." In general, when a definition begins with "Any of a/an" it is almost certainly a biological definition, and that certainty is increased if the "Any of a/an noun" is immediately followed by a parenthesis, unless the noun of the pattern is "breed."

THE MEMBER-SET RELATION

Another defining formula with an interesting resemblance to taxonomy also occurs in noun definitions. The pattern "A member of"-NP is similar to the basic organization of the "Any" definitions in that the immediate superordinate of the noun being defined is the object of the preposition "of", except in this pattern the relationship is, of course, member-set.

6a. hand  a member of a ship's crew.
 b. earl  a member of the third grade of the British peerage ranking below a marquess and above a viscount.
 c. Frank  a member of a West Germanic people entering the Roman provinces in A.D. 253, occupying the Netherlands and most of Gaul, and establishing themselves along the Rhine.
 d. republican  a member of a political party advocating republicanism
 e. Fox  a member of an Indian people formerly living in Wisconsin.
 f. Episcopalian  a member of an episcopal church (as the Protestant Episcopal Church).
 g. friar  a member of a mendicant order

What we have here is a generic term for any member of the specified set. It is perhaps best thought of as similar to the part-whole relation -- a hand is part of a crew, a Frank is part of a tribe, an earl is (somewhat inelegantly) part of a peerage. In our data the nouns being defined with this formula are invariably human. Of the 581 definitions which begin with "A member of", only nine define non-human nouns, and two of those are anthropomorphic.

7a. Jotunn  a member of a race of giants in Norse mythology
 b. Houyhnhnm  a member of a race of horses endowed with reason in Swift's Gulliver's Travels.

Why is it important to mark nouns in a lexicon as explicitly human? Many verbs can take only human subjects or objects. Also, the choice between the relative pronouns who and which depends on whether the referent is human or not.

The member-set relation needs to be distinguished from another relation that classifies a specific individual, as in

8a. Circe  sorceress who changed Odysseus' men into swine.

GENERIC AGENTS

Generic agents are the typical fillers of the agent argument slot for a given verb. They are particularly valuable in understanding intersentential references or generating them. One very surprising source of definitions for human nouns is the formula "One that." Of the 1419 examples of this pattern, 694, or 49 per cent, were verifiably human. That is, it was possible to determine from the definition itself or from associated definitions, such as a related verb, that the noun being defined was +human. This estimate is, therefore, conservative. It was also determined that a large portion of these definitions (30 per cent) were of occupations.

9a. goldbeater  one that beats gold into gold leaf
 b. pollster  one that conducts a poll or compiles data obtained by a poll.
 c. schoolmaster  one that disciplines or directs.
 d. hatter  one that makes, sells, or cleans and repairs hats.
 e. assassin  one that murders either for hire or for fanatical motives.
 f. taxpayer  one that pays or is liable to pay a tax
 g. teletypist  one that operates a teletypewriter.

WHAT THE PARENTHESES TELL US

The formula "one (..)" offers very different information. (This formula typically occurs somewhere in the middle of a definition, not at the beginning.) If the first word of the parenthetical information is not "as", a definition containing this pattern is a biological definition. The parenthetical material is the scientific name of the noun being defined. These definitions are sub-definitions and almost invariably follow "esp:".

10a. pimpernel  any of a genus (Anagallis) of herbs of the primrose family; esp: one (A. arvensis) whose scarlet, white, or purplish flowers close at the approach of rainy or cloudy weather.
  b. whelk  any of numerous large marine snails (as of the genus Buccinum); esp: one (B. undatum) much used as food in Europe.
  c. turnip  either of two biennial herbs of the mustard family with thick roots eaten as a vegetable or fed to stock, one (Brassica rapa) with hairy leaves and usu. flattened roots.
  d. capuchin  any of a genus (Cebus) of So. American monkeys; esp: one (C. capucinus) with the hair on its crown resembling a monk's cowl.
  e. croton  any of a genus (Croton) of herbs and shrubs of the spurge family, one (C. eluteria) of the Bahamas yielding cascarilla bark.
  f. bully tree  any of several tropical American trees of the sapodilla family; esp: one (Manilkara bidentata) that yields balata gum and heavy red timber.

SUFFIX DEFINITIONS

The defining pattern "One...(...specified/such...)" is an interesting sequence which is only used to define suffixes. The words "specified" and "such" signal this while at the same time indicating what semantic information should be taken from the stem to which the suffix is affixed.

11a. -er  one that is a suitable object of (a specified action).
  b. -ate  one acted upon (in a specified way).
  c. -morph  one having (such) a form.
  d. -path  one suffering from (such) an ailment.
  e. -ant  one that performs (a specified action).
  f. -grapher  one that writes about (specified) material or in a (specified) way.

Examples associated with some of the definitions in (11) are "isomorph," "psychopath," and "violinist." We are in the process of analyzing all instances of parenthetical "specified" and "such" to determine whether the defining formula exemplified by (11) is a general approach to the definition of affixes. Clearly, the use of parentheses is very significant, signalling an important semantic distinction.

WHAT NOUN DEFINITIONS TELL US ABOUT VERBS

Noun defining patterns can provide important information about specific verbs. Not surprisingly, one of these is the pattern "Act of Ving", which is an indicator of action verbs. Action verbs differ from stative verbs in a number of important ways. Action verbs like bite and persuade can appear in imperative sentences, while stative verbs like own and resemble cannot:

    Bite that man!  Persuade him to go!
    *Own the house!  *Resemble your father!

Action verbs take the progressive aspect; stative verbs do not:

    She is biting the man.  She is persuading him to go.
    *She is owning the house.  *She is resembling your father.

Action verbs can appear in a number of embedded sentences where statives cannot be used.

    I told her to bite the man.
    *I told her to own the house.

In definitions the action verb appears as the gerundive object of the preposition "of" or as the present-tense verb of the subordinate clause.

12a. plumbing  the act of using a plumb.
  b. forgiveness  the act of forgiving.
  c. soliloquy  the act of talking to oneself.
  d. projection  the act of throwing or shooting forward.
  e. refund  the act of refunding
  f. protrusion  the act of protruding.
  g. investiture  the act of ratifying or establishing in office.

The examples in (12) indicate that the related verb is not always morphologically related. This pattern could, therefore, be used as a means of accessing semantically related verbs and nouns or as a tool for the construction of a semantic network.

"The act of Ving" definitions have a subpattern which consists of "The act of Ving or the state of being <adj>." There are not many examples of this subpattern, but in all but one instance the noun being defined, the verb, and the adjective are morphologically related.

13a. adornment  the act of adorning or the state of being adorned.
  b. popularization  the act of popularizing or the state of being popularized
  c. nourishment  the act of nourishing or the state of being nourished.
  d. intrusion  the act of intruding or the state of being intruded.
  e. embodiment  the act of embodying or the state of being embodied.

In contrast, our data do not support the use of the corresponding formula "The state of being"-past part. for identifying stative verbs. Many instances of this pattern appear to be passives or stative uses of normally non-stative verbs. This position is supported by the presence of a fair number of definitions which conjoin the two formulae.

14a. displacement  the act or process of displacing: the state of being displaced.
  b. examination  the act or process of examining: the state of being examined.
  c. expansion  the act or process of expanding: the quality or state of being expanded.

It is likely that the formula "The quality or state of being"-past part. is a stative verb indicator when it does not co-occur with "Act of" definitions. Support comes from the frequency with which that pattern alternates adjectives, which are normally stative, with the past participle.

SELECTIONAL INFORMATION FOR VERB DEFINITIONS

Although the structure of verb definitions is much more limited than that of noun definitions, elements of verb definitions do provide interesting insights into collocational information. One striking example of this is the use of parenthetical information which flags typical instantiations of case arguments for the verb being defined. The most consistent of these patterns is "To"-V-(<"as">NP), where the NP is the typical object of the verb being defined.

15a. mount  to put or have (as artillery) in position.
  b. lay  to bring forth and deposit (an egg).
  c. develop  to subject (exposed photographic material) to a usu. chemical treatment...

We are in the process of determining how consistent the parenthetical "as" is in signalling typical case relations.
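A hedged sketch of reading these parenthetically flagged objects out of definitions like those in (15) follows; the pattern is our simplification of the formula, not the authors' code:

```python
import re

# In "to V (NP)" and "to V (as NP)" definitions, the parenthetical NP
# names a typical object of the defined verb (e.g. mount -> artillery).
PAREN_OBJ = re.compile(r"^to (?:\w+ )*?(\w+) \((?:as )?([^)]+)\)")

def typical_object(definition):
    """Return (verb, typical object) or None if the pattern is absent."""
    m = PAREN_OBJ.match(definition)
    return (m.group(1), m.group(2)) if m else None

print(typical_object("to put or have (as artillery) in position"))
# ('have', 'artillery')
print(typical_object("to bring forth and deposit (an egg)"))
# ('deposit', 'an egg')
```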
SELECTIONAL INFORMATION FOR ADJECTIVES

Adjective definitions differ from those of nouns and verbs in that while nouns are virtually always defined in terms of other nouns and verbs in terms of other verbs, only about 10 percent of adjectives are defined in terms of other adjectives -- the rest are related to nouns or sometimes to verbs. Furthermore, the semantic information in an adjective definition refers more to the noun (or type of noun) modified by the adjective than it does to the adjective itself. This is because an adjective, together with the noun it modifies, defines a taxonomic relationship -- or, to put it another way, denotes a feature of the thing defined in the adjective+noun phrase. For instance, we can say either that the phrase "big dog" denotes a particular kind of (the more general term) "dog", or that it denotes a dog with the additional feature of "bigness".

A useful piece of information we would like to get from adjective definitions is selectional information -- what sort of noun the adjective can meaningfully modify. Selectional restrictions are harder to find and are largely negative -- for instance, the formula "containing" defines adjectives that do not (in the sense so defined) modify animate nouns.

10a. basic  containing relatively little silica.
  b. normal  containing neither basic hydroxyl nor acid hydrogen.

The same is true of some other moderately common formulae, such as "consisting of", "extending" and "causing". We hope that further analysis will allow us to find more indications of selectional characteristics of adjectives.

RECOGNIZING ACTION VS. STATIVE ADJECTIVES

One property belonging more to adjectives themselves than to their associated nouns is an active-stative distinction similar to that found in verbs. The test for an "active" adjective is that one may use it in a statement of the form "they are being ----" or in the command "be ----!", e.g. "be aggressive!" or "be good!", but not *"be tall!" or *"be ballistic!" As these examples indicate, most adjectives that can be used actively can also be used statively -- aggressiveness or goodness may be thought of as a state rather than as an action -- but not the other way around.

Contrary to our expectations, the active-stative parameter of adjectives is much easier to identify in definitions than is selectional information. Some of the defining formulae discussed in Smith (1981) and Ahlswede (1985b) seem to be limited to stative adjectives. "Of or relating to", one of the most common, is one of these:

11a. ballistic  of or relating to ballistics or to a body in motion according to the laws of ballistics.
  b. literary  of or relating to books.

Although many adjectives defined with "of or relating to" can be used actively in other senses, they are strictly stative in the senses where this formula is used:

12a. civil  of or relating to citizens <~ liberties>.
  b. peaceful  of or relating to a state or time of peace.

The common formula "being ...", on the other hand, defines adjectives which at least lean toward the action end of the spectrum:

13a. natural  being in accordance with or determined by nature.
  b. cursed  being under or deserving a curse.

Even such a normally stative adjective as "liquid" is relatively active in one of its senses:

14a. liquid  being musical and free of harshness in sound.

By no means all formulae give indications of the stative-active qualities of an adjective. A large family of formulae ("having", "characterized by", "marked by", etc.), denoting attribution, are completely neutral with respect to this parameter.

SUMMARY

W7 contains a wealth of implicit information. We have presented methods for making some of this information explicit by focussing on specific formulae found in noun, verb, and adjective definitions. Most of these formulae appear at the start of definitions, but we have also demonstrated that important information can be extracted from syntactic and graphemic elements, such as parentheticals. The information we have extracted involves lexical relationships such as taxonomy and set membership, selectional restrictions, and special subcategories of nouns, verbs, and adjectives.
This information is used by an automatic lexicon builder to create lexical entries automatically from W7 definitions.

REFERENCES

Ahlswede, Thomas. 1985a. "A Tool Kit for Lexicon Building," Proceedings of the 23rd Annual Meeting of the ACL, Chicago, pp. 268-278.

Ahlswede, Thomas. 1985b. "A Linguistic String Grammar for Adjective Definitions," in S. Williams, ed., Humans and Machines: The Interface through Language. Ablex, Norwood, NJ, pp. 101-127.

Ahlswede, Thomas and Martha Evens. 1983. "Generating a Relational Lexicon from a Machine-Readable Dictionary." Forthcoming.

Amsler, Robert. 1980. The Structure of the Merriam-Webster Pocket Dictionary. Ph.D. Dissertation, Computer Science, University of Texas, Austin.

Amsler, Robert. 1981. "A Taxonomy for English Nouns and Verbs." Proceedings of the 19th Annual Meeting of the ACL, Stanford, pp. 133-138.

Amsler, Robert. 1982. "Computational Lexicology: A Research Program." Proceedings of the National Computer Conference, AFIPS, pp. 657-663.

Amsler, Robert and John White. 1979. Development of a Computational Methodology for Deriving Natural Language Semantic Structures via Analysis of Machine-Readable Dictionaries. TR MCS77-01315, Linguistics Research Center, University of Texas.

Apresyan, Yuri, Igor Mel'cuk, and Alexander Zholkovsky. 1970. "Semantics and Lexicography: Towards a New Type of Unilingual Dictionary," in F. Kiefer, ed., Studies in Syntax and Semantics, Reidel, Dordrecht, Holland, pp. 1-33.

Chodorow, Martin and Roy Byrd. 1985. "Extracting Semantic Hierarchies from a Large On-Line Dictionary." Proceedings of the 23rd Annual Meeting of the ACL, pp. 299-304.

Evens, Martha and Raoul Smith. 1978. "A Lexicon for a Computer Question-Answering System," American Journal of Computational Linguistics, No. 4, pp. 1-96.

Evens, Martha, Bonnie Litowitz, Judith Markowitz, Raoul Smith, and Oswald Werner. 1980. Lexical-Semantic Relations: A Comparative Survey, Linguistic Research, Inc., Edmonton, Alberta.

Evens, Martha, James Vandendorpe, and Yih-Chen Wang. 1985. "Lexical-Semantic Relations in Information Retrieval," in S. Williams, ed., Humans and Machines. Ablex, Norwood, New Jersey, pp. 73-100.

Fox, Edward. 1980. "Lexical Relations: Enhancing Effectiveness of Information Retrieval Systems," ACM SIGIR Forum, 15, 3, pp. 5-36.

Kucera, Henry, and Nelson Francis. 1967. Computational Analysis of Present-Day American English, Brown University Press, Providence, Rhode Island.

Li, Ping-Yang, Thomas Ahlswede, Carol Curt, Martha Evens, and Daniel Hier. 1985. "A Text Generation Module for a Decision Support System for Stroke," Proc. 1985 Conference on Intelligent Systems and Machines, Rochester, Michigan, April.

Mel'cuk, Igor, and Alexander Zholkovsky. 1985. Explanatory-Combinatory Dictionary of Russian, Wiener Slawisticher Almanach, Vienna.

Mel'cuk, Igor, Nadia Arbatchewsky-Jumarie, Leo Elnitzky, Lidia Iordanskaya, and Adele Lessard. 1984. Dictionnaire Explicatif et Combinatoire du Francais Contemporain, Presses de l'Universite de Montreal, Montreal.

Michiels, A. 1981. Exploiting a Large Dictionary Data Base. Ph.D. Thesis, University of Liege, Belgium.

Michiels, A. 1983. "Automatic Analysis of Texts." Workshop on Machine Readable Dictionaries, SRI, Menlo Park, CA.

Olney, John, Carter Revard, and Paul Zeff. 1967. "Processor for Machine-Readable Version of Webster's Seventh at System Development Corporation." The Finite String, 4.3, pp. 1-2.

Sager, Naomi. 1981. Natural Language Information Processing. Addison-Wesley, New York.

Smith, Raoul. 1981. "On Defining Adjectives, Part III." In Dictionaries: Journal of the Dictionary Society of North America, No. 3, pp. 28-38.

Wang, Yih-Chen, James Vandendorpe, and Martha Evens. 1985. "Relational Thesauri in Information Retrieval," JASIS, Vol. 36, No. 1, pp. 15-27.

Webster's Seventh New Collegiate Dictionary. 1963. G. & C. Merriam Company, Springfield, Massachusetts.

White, Carolyn. 1983. "The Linguistic String Project Dictionary for Automatic Text Analysis," Workshop on Machine-Readable Dictionaries, SRI, April.
COMPUTER METHODS FOR MORPHOLOGICAL ANALYSIS

Roy J. Byrd, Judith L. Klavans
I.B.M. Thomas J. Watson Research Center, Yorktown Heights, New York 10598

Mark Aronoff, Frank Anshen
SUNY / Stony Brook, Stony Brook, New York 11794

1. Introduction

This paper describes our current research on the properties of derivational affixation in English. Our research arises from a more general research project, the Lexical Systems project at the IBM Thomas J. Watson Research laboratories, the goal of which is to build a variety of computerized dictionary systems for use both by people and by computer programs. An important sub-goal is to build reliable and robust word recognition mechanisms for these dictionaries. One of the more important issues in word recognition for all morphologically complex languages involves mechanisms for dealing with affixes.

Two complementary motivations underlie our research on derivational morphology. On the one hand, our goal is to discover linguistically significant generalizations and principles governing the attachment of affixes to English words to form other words. If we can find such generalizations, then we can use them to build our improved word recognizer. We will be better able to correctly recognize and analyse well-formed words and, on the other hand, to reject ill-formed words. On the other hand, we want to use our existing word-recognition and analysis programs as tools for gathering further information about English affixation. This circular process allows us to test and refine our emerging word recognition logic while at the same time providing a large amount of data for linguistic analysis.

It is important to note that, while doing derivational morphology is not the only way to deal with complex words in a computerized dictionary, it offers certain advantages. It allows systems to deal with coinages, a possibility which is not open to most systems. Systems which do no morphology, and even those which handle primarily inflectional affixation (such as Winograd (1971) and Koskenniemi (1983)), are limited by the fixed size of their lists of stored words. Koskenniemi claims that his two-level morphology framework can handle derivational affixation, although his examples are all of inflectional processes. It is not clear how that framework accounts for the variety of phenomena that we observe in English derivational morphology. Morphological analysis also provides an additional source of lexical information about words, since a word's properties can often be predicted from its structure. In this respect, our dictionaries are distinguished from those of Allen (1976), where complex words are merely analysed as concatenations of word-parts, and Cercone (1974), where word structure is not exploited even though derivational affixes are analysed.

Our morphological analysis system was conceived within the linguistic framework of word-based morphology, as described in Aronoff (1976). In our dictionaries, we store a large number of words, together with associated idiosyncratic information. The retrieval mechanism contains a grammar of derivational (and inflectional) affixation which is used to analyse input strings in terms of the stored words. The mechanism handles both prefixes and suffixes. The framework and mechanism are described in Byrd (1983a). Crucially, in our system, the attachment of an affix to a base word is conditioned on the properties of the base word. The purpose of our research is to determine the precise nature of those conditions.
These conditions may refer to syntactic, semantic, etymological, morphological or phonological properties (see Byrd (1983b)).

Our research is of interest to two related audiences: both computational linguists and theoretical linguists. Computational linguists will find here a powerful set of programs for processing natural language material. Furthermore, they should welcome the improvements to those programs' capabilities offered by our linguistic results. Theoretical linguists, on the other hand, will find a novel set of tools and data sources for morphological research. The generalizations that result from our analyses should be welcome additions to linguistic theory.

2. Approach and Tools

Our approach to computer-aided morphological research is to analyse a large number of English words in terms of a somewhat smaller list of monomorphemic base words. For each morphologically complex word on the original list which can be analysed down to one of our bases, we obtain a structure which shows the affixes and marks the parts-of-speech of the components. Thus, for beautification, we obtain the structure <<<beauty>N +ify>V +ion>N. In this structure, the noun beauty is the ultimate base and +ify and +ion are the affixes. After analysis, we obtain, for each base, a list of all words derived from it, together with their morphological structures. We then study these lists and the patterns of affixation they exemplify, seeking generalizations. Section 3 will give an expanded description of the approach together with a detailed account of one of the studies.

We have two classes of tools: word lists and computer programs. There are basically four word lists.

1. The Kucera and Francis (K&F) word list, from Kucera and Francis (1967), contains 50,000 words listed in order of frequency of occurrence.

2. The BASE WORD LIST consists of approximately 3,000 monomorphemic words. It was drawn from the top of the K&F list by the GETBASES procedure described below.

3. The UDICT word list consists of about 63,000 words, drawn mainly from Merriam (1963). The UDICT program, described below, uses this list in conjunction with our word grammar to produce morphological analyses of input words. The UDICT word list is a superset of the base word list; for each word, it contains the major category as well as other grammatical information.

4. The "complete" word list consists of approximately one quarter million words drawn from an international-sized dictionary. Each entry on this list is a single orthographic word, with no additional information. These are the words which are morphologically analysed down to the bases on our base list.

5. We have prepared reverse spelling word lists based on each of the other lists. A particularly useful tool has been a group of reverse lists derived from Merriam (1963) and separated by major category. These lists provide ready access to sets of words having the same suffix.

Our computer programs include the following.

1. UDICT. This is a general purpose dictionary access system intended for use by computer programs. (The UDICT program was originally developed for the EPISTLE text-critiquing system, as described in Heidorn et al. (1982).) It contains, among other things, the morphological analysis logic and the word grammar that we use to produce the word structures previously described.

2. GETBASES. This program produces a list of monomorphemic words from the original K&F frequency lists. Basically, it operates by invoking UDICT for each word.
The output consists of words which are morphologically simple, and the bases of morphologically complex words. (Among other things, this allows us to handle the fact that the original K&F lists are not lemmatised.) The resulting list, with duplicates removed, is our "base list".

3. ANALYSE. ANALYSE takes each entry from the complete word list and invokes the UDICT program to give a morphological analysis for that word. Any word whose ultimate base is in the base list is considered a derived word. For each word from the base list, the final result is a list of [derived-word, structure] pairs. The data produced by ANALYSE is further processed by the next four programs.

4. ANALYSES. This program allows us to inspect the set of [derived-word, structure] pairs associated with any word in the base list. For example, its output for the word beauty is shown in Figure 1. In the structures, an asterisk represents the ultimate base beauty.

    beautied          <<*>N +ed>A
    beautification    <<<*>N +ify>V +ion>N
    beautifier        <<<*>N +ify>V #er>N
    beautiful         <<*>N #ful>A
    beautifully       <<<*>N #ful>A -ly>D
    beautifulness     <<<*>N #ful>A #ness>N
    beautify          <<*>N +ify>V
    unbeautified      <un# <<<*>N +ify>V +ed>A>A
    unbeautified      <un# <<<*>N +ify>V -ed1>V>V
    unbeautiful       <un# <<*>N #ful>A>A
    unbeautifully     <<un# <<*>N #ful>A>A -ly>D
    unbeautifulness   <<un# <<*>N #ful>A>A #ness>N
    unbeautify        <un# <<*>N +ify>V>V
    rebeautify        <re# <<*>N +ify>V>V

    Figure 1. ANALYSES Output.

5. SASDS. This program produces 3 binary matrices indicating which bases take which single affixes to form another word. One matrix is produced for each of the major categories: nouns, adjectives, and verbs. More detail on the contents and use of these matrices is given in Section 3.

6. MORPH. This program uses the matrices created by SASDS to list bases that accept one or more given affixes.

7. SAS. (SAS is a trademark of the SAS Institute, Inc., Cary, North Carolina.) This is a set of statistical analysis programs which can be used to analyse the matrices produced by SASDS.

8. WordSmith. This is an on-line dictionary system, developed at IBM, that provides fast and convenient reference to a variety of types of dictionary information. The WordSmith functions of most use in our current research are the REVERSE dimension (for listing words that end the same way), the WEBSTER7 application (for checking the definitions of words we don't know), and the UDED application (for checking and revising the contents of the UDICT word list).

3. Detailed Methods

Our research can be conveniently described as a two-stage process. During the first stage, we endeavored to produce a list of morphologically active base words from which other English words can be derived by affixation. The term "morphologically active" means that a word can potentially serve as the base of a large number of affixed derivatives. Having such words is important for stage two, where patterns of affixation become more obvious when we have more instances of bases that exhibit them. We conjectured that words which are frequent in the language have a higher likelihood of participating in word-formation processes, so we began our search with the 6,000 most frequent words in the K&F word list. The GETBASES program segregated these words into two categories: morphologically simple words (i.e., those for which UDICT produced a structure containing no affixes) and morphologically complex words.
At the same time, GETBASES discarded words that were not morphologically interesting; these included proper nouns, words not belonging to the major categories, and non-lemma forms of irregular words. (For example, the past participle done does not take affixes, although its lemma do will accept #able, as in doable.) GETBASES next considered the ultimate bases of the morphologically complex words. Any base which did not also appear in the K&F word list was discarded. The remaining bases were added to the original list of morphologically simple words. After removing duplicates, we obtained a list of approximately 3,000 very frequent bases which we conjectured were morphologically active.

Development of the GETBASES program was an iterative process. The primary type of change made at each iteration was to correct and improve the UDICT grammar and morphological analysis mechanism. Because the constraints on the output of GETBASES were clear (and because it was obvious when we failed to meet them), the creation of GETBASES proved to be a very effective way to guide improvements to UDICT. The more important of these improvements are discussed in Section 4.3.

For stage two of our project, we used ANALYSE to process the "complete" word list, as described in Section 2. That is, for each word, UDICT was asked to produce a morphological analysis. Whenever the ultimate base for one of the (morphologically complex) words appeared on our list of 3,000 bases, the derived word and its structure were added to the list of such pairs associated with that base. ANALYSE yielded, therefore, a list of 3,000 sublists of [word, structure] pairs, with each sublist named by one of our base words. We called this result BASELIST.

Figure 2. The NOUNS, ADJECTIVES, and VERBS matrices from SASDS. [The figure is not reproducible in this copy: it shows three binary matrices, one per major category, with bases as rows (e.g. anchor, answer; faint, familiar; study, suggest), suffixes as columns, and a 1 marking that the base accepts the suffix.]
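The matrices in Figure 2 are simple base-by-affix incidence tables. A hedged sketch of how such a table might be assembled from the analysed pairs follows; the data and function names are illustrative, and SASDS itself works from full bracketed structures rather than the (base, affix) tuples used here:

```python
# Build one SASDS-style binary matrix from (base, affix) observations
# extracted from single-affix analyses such as <<beauty>N +ify>V.
def build_matrix(observations, bases, affixes):
    """Return {base: {affix: 0 or 1}}."""
    matrix = {b: {a: 0 for a in affixes} for b in bases}
    for base, affix in observations:
        if base in matrix and affix in matrix[base]:
            matrix[base][affix] = 1
    return matrix

obs = [("beauty", "+ify"), ("beauty", "#ful"), ("anchor", "#age")]
m = build_matrix(obs, ["beauty", "anchor"], ["+ify", "#ful", "#age"])
print(m["beauty"])   # {'+ify': 1, '#ful': 1, '#age': 0}
```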
Our first in-depth study of this material involved the process of adding a single affix to a base word to form another word. By applying SASDS to BASELIST, we obtained 3 matrices showing, for each base, which affixes it did and did not accept. The noun matrix contained 1900 bases; the adjective matrix contained 850 bases; and the verb matrix contained 1600 bases. (Since the original list of bases contained words belonging to multiple major categories, these counts add up to more than 3,000. The ANALYSE program used the part-of-speech assignments from UDICT to disambiguate such homographs.)

Figure 2 contains samples taken from the noun, adjective, and verb matrices. For each matrix, the horizontal axis shows the complete list of affixes (for that part-of-speech) covered in our study. The vertical axes give contiguous samples of our ultimate bases.

Our results are by no means perfect. Some of our mis-analyses come about because of missing constraints in our grammar. The process of correcting these errors is discussed in Section 4. Sometimes there are genuine ambiguities, as with the words refuse (<re# <fuse>V>V) and preserve (<pre# <serve>V>V). In the absence of information about how an input word is pronounced or what it means, it is difficult to imagine how our analyser can avoid producing the structures shown. Some of our problems are caused by the fact that the complete word list is alternately too large and not large enough. It includes the word artal (plural of rotl, a Middle Eastern unit of weight), which our rules dutifully, if incorrectly, analyse as <<art>N +al>A. Yet it fails to include angelhood, even though angel bears the [+human] feature that #hood seems to require.

Despite such errors, however, most of the analyses in these matrices are correct and provide a useful basis for our analytical work. We employed a variety of techniques to examine these matrices and the BASELIST. Our primary approach was to use SAS, MORPH, and ANALYSES to suggest hypotheses about affix attachment. We then used MORPH, WordSmith, and UDICT (via changes to the grammar) to test and verify those hypotheses. Hypotheses which have so far survived our tests and our skepticism are given in Section 4.

4. Results

Using the methods described, we have produced results which enhance our understanding of morphological processes, and have produced improvements in the morphological analysis system. We present here some of what we have already learned. Continued research using our approach and data will yield further results.

4.1 Methodological Results

It is significant that we were able to perform this research with generally available materials. With the exception of the K&F word frequency list, our word lists were obtained from commercially available dictionaries. This work forms a natural accompaniment to another Lexical Systems project, reported in Chodorow et al. (1985), in which semantic information is extracted from commercial dictionaries. As the morphology project identifies lexical information that is relevant, variations of the semantic extraction methods may be used to populate the dictionary with that information.

As has already been pointed out, our rules leave a residue of mis-analysed words, which shows up (for example) as errors in our matrices. Although we can never eliminate this residue, we can reduce its size by introducing additional constraints into our grammar as we discover them. For example, chicken was mis-analysed as <<chic>A +en>V.
As we show in greater detail below, we now know that the +en suffix requires a [+Germanic] base; since chic is [-Germanic], we can avoid the mis-analysis. Similarly, we can avoid analysing legal as <<leg>N +al>A by observing that +al requires a [-Germanic] base while leg is [+Germanic]. Finally, we now have several ways to avoid the mis-analysis of maize as <<ma>N +ize>V, including the observation that +ize does not accept monosyllabic bases. We don't expect, however, to find a constraint that will deal correctly with words like artal.

In the introduction, we pointed out that one of our goals was to build a system which can handle coinages. With respect to the 63,000-word UDICT word list, the quarter-million-word complete word list can be viewed as consisting mostly of coinages. The fact that our analyser has been largely successful at analysing the words on the complete word list means that we are close to meeting our goal. What remains is to exploit our research results in order to reduce our mis-analysed residue as much as possible.

4.2 Linguistic Results

Linguistically significant generalizations that have resulted so far can be encoded in the form of conditions and assertions in our word formation rule grammar (see Byrd (1983a)). They typically constrain interactions between specific affixes and particular groups of words. The linguistic constraints fall into at least three categories: (1) the syllabic structure of the base word; (2) the phonemic nature of the final segment of the base word; and (3) the etymology of the base word, both derived and underived. Each of these is covered below. Some of these constraints have been informally observed by other researchers, but some have not.

Constraints on the syllabic structure of the base word. It is commonly known that the length of a base word can affect an inflectional process such as comparative formation in English. One can distinguish between short and long words, where [+short] indicates two or fewer syllables and [+long] indicates two or more syllables. For example, a word such as big, which is [+short], can take the affixes -er and -est. In contrast, words which are [-short] cannot, cf. possible, *possibler, *possiblest. (There are additional constraints on comparative formation, which we will not go into here. We give here only the simplified version.) We have found that other suffixes appear to require the feature [+short]. For example, nouns that take the suffix #ish tend to be [+short]. The actual results of our analysis show that no words of four syllables took #ish and only seven words of three syllables took #ish. In contrast, a total of 221 one- and two-syllable words took this suffix. The suffix thus preferred one-syllable words over two-syllable words by a factor of four (178 one-syllable words over 43 two-syllable words). Compare boy/boyish with mimeograph/mimeographish. This is not to say that a word like mimeographish is necessarily ill-formed, but that it is less likely to occur, and in fact it did not occur in a list like Merriam (1963).

Two other suffixes also appear to select for number of syllables in the base word. In this case the denominal verb suffixes +ize and +ify are nearly in complementary distribution. Our data show that of the approximately 200 bases which take +ize, only seven are monosyllabic. Compare this with the suffix +ify, which selects for about 100 bases, of which only one is trisyllabic and 17 are disyllabic.
Thus, +ify tends to select for [+short] bases while +ize tends to select for [+long] ones. As with #ish, there appears to be motivation for syllabic structure constraints on morphological rules. In the case of +ize and +ify it appears that the syllabic structure of the suffix interacts with the syllabic structure of the base. Informally, the longer suffix selects for a [+short] base, and the shorter suffix selects for a [+long] base. Our speculation is that this may be related to the notion of optimal target metrical structure as discussed in Hayes (1983). This notion, however, is the subject of future research.

The final segment of the base word. The phonemic nature of the final segment appears to affect the propensity of a base to take an affix. Consider the fact that there occurred some 48 +ary adjectives derived from nouns in our data. Of these, 46 are formed from bases ending with alveolars. The category alveolar includes the phonemes /t/, /d/, /n/, /s/, /z/, and /l/. The two exceptions are customary and palmary. Again, in a word recognizer, if a base does not end in one of these phonemes, then it is not likely to be able to serve as the base of +ary. We have also found that the ual spelling of the +al suffix prefers a preceding alveolar, as in gradual, sexual, habitual. Another result related to the alveolar requirement is an even more stringent requirement of the nominalizing suffix +ity. Of the approximately 150 nouns taking +ity, only three end in the phoneme /t/ (chastity, sacrosanctity, and vastity). In addition, the suffix +cy seems also to attach primarily to bases ending in /t/. The exceptions are normalcy and supremacy.

Etymology of the base word. The feature [+Germanic] is said to be of critical importance in the analysis of English morphology (Chomsky and Halle 1968, Marchand 1969). In two cases our data show this to be true. The suffix +en, which creates verbs from adjectives, as in moist/moisten, yielded a total of fifty-five correct analyses. Of these, forty-three appear in Merriam (1963), and of these forty-one are of Germanic origin. The remaining two are quieten and neaten. The former is found only in some dialects. It is clear that +en verbs are overwhelmingly formed on [+Germanic] bases.

The feature [Germanic] is also significant with +al adjectives. In contrast to the +en suffix, +al selects for the feature [-Germanic]. In our data, there were some two hundred and seventy-two words analysed as adjectives derived from nouns by +al suffixation. Of the base words which appear in Merriam (1963), only one, bridal, is of Germanic origin. However, interestingly, it turns out that the analysis <<bride>N +al>A is spurious, since bridal is the reflex of an Old English form brydealu, a noun referring to the wedding feast. The adjective bridal is not derived from bride. Rather, it was zero-derived historically from the nominal form.

Finally, other findings from our analysis show that no words formed with the Anglo-Saxon prefixes a+, be+ or for+ will negate with the Latinate prefixes non# or in#. This supports the findings of Marchand (1969). Observe that in these examples, the constraint applies between affixes, rather than between an affix and a base. The addition of an affix thus creates a new complex lexical item, complete with additional properties which can constrain further affixation.

In sum, our sample findings suggest a number of new constraints on morphological rules. In addition, we provide evidence and support for the observations of others.
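These generalizations translate naturally into per-affix conditions on base-word features. The sketch below encodes a few of them; the feature names and threshold values are our simplifications of the findings above, not UDICT's actual grammar:

```python
# Conditions a base must satisfy for an affix to attach; the features
# (syllable count, final phoneme, etymology) are assumed to be stored
# with each base entry in the lexicon.
CONDITIONS = {
    "+ize": lambda b: b["syllables"] >= 2,           # prefers [+long] bases
    "+ify": lambda b: b["syllables"] <= 2,           # prefers [+short] bases
    "+ity": lambda b: b["final_phoneme"] != "t",     # bases rarely end in /t/
    "+en":  lambda b: b["etymology"] == "Germanic",  # moisten, but *chic+en
    "+al":  lambda b: b["etymology"] != "Germanic",  # blocks <<leg>N +al>A
}

def may_attach(affix, base):
    condition = CONDITIONS.get(affix)
    return condition(base) if condition else True

chic = {"syllables": 1, "final_phoneme": "k", "etymology": "Romance"}
print(may_attach("+en", chic))   # False -> "chicken" is not chic + en
```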
4.3 Improvements to the Implementation

In addition to using our linguistic results to change the grammar, we have also made a variety of improvements to UDICT's morphological analyser, which interprets that grammar. Some have been for our own convenience, such as streamlining the procedures for changing and compiling the grammar. Two of the improvements, however, result directly from the analysis of our word lists and files. These improvements represent generalizations over classes of affixes.

First, we observed that, with the exception of be, do, and go, no base spelled with fewer than three characters ever takes an affix. Adding code to the analyser to restrict the size of bases has had an important effect in avoiding spurious analyses.

A more substantial result is that we have added to UDICT a comprehensive set of English spelling rules which make the right spelling adjustments to the base of a suffix virtually all of the time. These rules, for example, know when and when not to double final consonants, when to retain silent e preceding a suffix beginning with a vowel, and when to add k to a base ending in c. These rules are a critical aspect of UDICT's ability to robustly handle normal English input and to avoid misanalyses.

5. Further Analyses and Plans

When we have modified our grammar to incorporate the results we have obtained, and added the necessary supporting features and attributes to the words in UDICT's word list, we will re-run our programs to produce files based on the corrected analyses that we will obtain. These files will, in turn, be used for further analysis in the Lexical Systems project, and by other researchers.

We plan to continue our work by looking for more constraints on affixation. A reasonable, if ambitious, goal is to achieve a word formation rule grammar which is "tight" enough to allow us to reliably generate words using derivational affixation. Such a capability would be important, for example, in a translation application where idiomaticness often requires that a translated concept appear with a different part-of-speech than in the source language.

Further research will investigate patterns of multiple affixation. Are there any interdependencies among affixes when more than one appear in a given word? If so, what are they? One important question in this area has to do with violations of the Affix Ordering Generalization (Siegel (1974)), sometimes known as "bracketing paradoxes".

A related issue which emerged during our work concerns prefixes, such as pre# and over#, which apparently ignore the category of their bases. It may be that recursive application of prefixes and suffixes is not the best way to account for such prefixes. We would like to use our data to address this question.

Our data can also be used to investigate the morphological behavior of words which are "zero-derived" or "drifted" from a different major category. Such words are the nouns considerable, accused, and beyond listed in Merriam (1963). Contrary to our goal for GETBASES (to produce a list of morphologically active bases), these words never served as the base for derivational affixation in our data. We conjecture that some mechanism in the grammar prevents them from doing so, and plan to investigate the nature of that mechanism.
Obtaining results from investigations of this type will not only be important for producing a robust word analysis system, it will also significantly contribute to our theoretical understanding of morphological phenomena.

Acknowledgments

We are grateful to Mary Neff and Martin Chodorow, both members of the Lexical Systems project, for ongoing comments on this research. We also thank Paul Cohen for advice on general lexicographic matters and Paul Tukey for advice on statistical analysis methods.

References

Allen, J. (1976) "Synthesis of Speech from Unrestricted Text," Proceedings of the IEEE 64, 433-442.

Aronoff, M. (1976) Word Formation in Generative Grammar, Linguistic Inquiry Monograph 1, MIT Press, Cambridge, Massachusetts.

Byrd, R. J. (1983a) "Word formation in natural language processing systems," Proceedings of IJCAI-VIII, 704-706.

Byrd, R. J. (1983b) "On Restricting Word Formation Rules," unpublished paper, New York University.

Cercone, N. (1974) "Computer Analysis of English Word Formation," Technical Report TR74-6, Department of Computing Science, University of Alberta, Edmonton, Alberta, Canada.

Chodorow, M. S., R. J. Byrd, and G. E. Heidorn (1985) "Extracting Semantic Hierarchies from a Large On-line Dictionary," Proceedings of the Association for Computational Linguistics, 299-304.

Chomsky, N. and M. Halle (1968) The Sound Pattern of English, MIT Press, Cambridge, Massachusetts.

Hayes, B. (1983) "A Grid-based Theory of English Meter," Linguistic Inquiry 14:3:357-393.

Heidorn, G. E., K. Jensen, L. A. Miller, R. J. Byrd, and M. S. Chodorow (1982) "The EPISTLE Text-Critiquing System," IBM Systems Journal 21, 305-326.

Koskenniemi, K. (1983) Two-level Morphology: A General Computational Model for Word-form Recognition and Production, University of Helsinki, Department of General Linguistics.

Kucera, H. and W. N. Francis (1967) Computational Analysis of Present-Day American English, Brown University Press, Providence, Rhode Island.

Marchand, H. (1969) The Categories and Types of Present-Day English Word-Formation, C. H. Beck'sche Verlagsbuchhandlung, Munich.

Merriam (1963) Webster's Seventh New Collegiate Dictionary, Merriam, Springfield, Massachusetts.

Siegel, D. (1974) Topics in English Morphology, Doctoral Dissertation, MIT, Cambridge, Massachusetts.

Winograd, T. (1971) "An A. I. Approach to English Morphemic Analysis," A. I. Memo No. 241, A. I. Laboratory, MIT, Cambridge, Massachusetts.
BRINGING NATURAL LANGUAGE PROCESSING TO THE MICROCOMPUTER MARKET: THE STORY OF Q&A

Gary G. Hendrix
Symantec Corporation
10201 Torre Avenue
Cupertino, CA 95014

OVERVIEW

This is the story of how one of the new natural language processing products reached the marketplace. On the surface, it is the story of one NL researcher-turned-entrepreneur (yours truly) and of one product, Q&A. But this is not just my story: it is in microcosm the story of NL emerging from the confines of the academic world, which in turn is an instance of the old theme "science goes commercial."

BACKGROUND

In September of 1985, Symantec introduced its first commercial product, Q&A. Q&A is a $299 integrated business-productivity tool for the IBM PC/XT/AT and compatibles. It includes a file management system, a report generator, a word processor, a spelling checker and an "intelligent assistant" or "IA." The IA lets users manipulate databases and produce reports by issuing commands or asking questions in English.

WHY Q&A IS IMPORTANT

Q&A is important to everyone with an interest in natural language processing because it is bringing NL technology to the attention of the world at large. Already Q&A is the most widely used NL system ever developed, with hundreds of people now using it on a daily basis and thousands using it occasionally. Here is a small sampling of the reaction:

* SoftSel, the largest US distributor of microcomputer software, publishes a biweekly "hot list" ranking its best-selling products. At the time of this writing, Q&A is number 3 on the list, below only dBASE III and Lotus 123, with monthly sales in the thousands. Some weeks Q&A has actually been above Lotus on the charts.

* Every major publication addressing the IBM PC market in the U.S., Europe and Australia has written about Q&A, and their reviews have consistently been favorable. Infoworld gave Q&A a 5-disk rating--its highest. PC Week has called Q&A the "quintessential management tool." In the New York Times, Q&A received an unprecedented 2-part review, and it took honors as the "software product of the year" in Australia.

* In a comprehensive survey of file management systems for IBM PCs, Software Digest, which is widely considered to provide the microcomputer industry's most objective testing, gave Q&A the highest overall evaluation ever given to any product in any category.

WHAT'S THE STORY BEHIND Q&A?

Compressing all detail, Q&A is a direct outgrowth of NL research conducted at SRI International in the 1970's. About the time SRI's LADDER project was winding down, the Apple Computer appeared on the scene and I got the crazy idea that it would be neat to build a LADDER for the Apple. The known problems were that there wasn't an INTERLISP for the Apple, LADDER's code was over 100 times larger than the Apple's 48K memory, the Apple was too slow, and LADDER couldn't be ported to new databases relevant to personal use. These turned out to be the easy problems. Getting Q&A off the ground eventually entailed starting two companies, coming close to going broke on multiple occasions, putting a number of personal friends through months or years of stress, building a new culture for AI/micro/marketing cross fertilization, and learning about lawyers, finance, product marketing, PC DOS, C, advertising, PR, promotions, sales, and end users. But in the end, the product happened.

CONCLUSION

Thousands of people now own or use NL systems, and hundreds of thousands have read about them.
The world of NL has changed, and new opportunities for research and commercialization abound. Acknowledgment: Major contributors to the design/implementation of Q&A's IA were Dan Gordon, Brett Walter and Denis Coleman.
BULK PROCESSING OF TEXT ON A MASSIVELY PARALLEL COMPUTER

Gary W. Sabot
Thinking Machines Corporation
245 First St.
Cambridge, MA 02142

Abstract

Dictionary lookup is a computational activity that can be greatly accelerated when performed on large amounts of text by a parallel computer such as the Connection Machine (TM) Computer (CM). Several algorithms for parallel dictionary lookup are discussed, including one that allows the CM to look up words at a rate 450 times that of lookup on a Symbolics 3600 Lisp Machine.

1 An Overview of the Dictionary Problem

This paper will discuss one of the text processing problems that was encountered during the implementation of the CM-Indexer, a natural language processing program that runs on the Connection Machine (CM). The problem is that of parallel dictionary lookup: given both a dictionary and a text consisting of many thousands of words, how can the appropriate definitions be distributed to the words in the text as rapidly as possible? A parallel dictionary lookup algorithm that makes efficient use of the CM hardware was discovered and is described in this paper.

It is clear that there are many natural language processing applications in which such a dictionary algorithm is necessary. Indexing and searching of databases consisting of unformatted natural language text is one such application. The proliferation of personal computers, the widespread use of electronic memos and electronic mail in large corporations, and the CD-ROM are all contributing to an explosion in the amount of useful unformatted text in computer readable form. Parallel computers and algorithms provide one way of dealing with this explosion.

2 The CM: Machine Description

The CM consists of a large number of processor/memory cells. These cells are used to store data structures. In accordance with a stream of instructions that are broadcast from a single conventional host computer, the many processors can manipulate the data in the nodes of the data structure in parallel.

Each processor in the CM can have its own local variables. These variables are called parallel variables, or parallel fields. When a host computer program performs a serial operation on a parallel variable, that operation is performed separately in each processor in the CM. For example, a program might compare two parallel string variables. Each CM processor would execute the comparison on its own local data and produce its own local result. Thus, a single command can result in tens of thousands of simultaneous CM comparisons.

In addition to their computation ability, CM processors can communicate with each other via a special hardware communication network. In effect, communication is the parallel analog of the pointer-following executed by a serial computer as it traverses the links of a data structure or graph.

3 Dictionary Access

A dictionary may be defined as a mapping that takes a particular word and returns a group of status bits. Status bits indicate which sets or groups of words a particular word belongs to. Some of the sets that are useful in natural language processing include syntactic categories such as nouns, verbs, and prepositions. Programs also can use semantic characterization information. For example, knowing whether a word is the name of a famous person (e.g. Lincoln, Churchill), a place, an interjection, or a time or calendar term will often be useful to a text processing program.
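One natural encoding of such a definition is a small integer used as a bit mask, with one bit per word class. The sketch below (in Python rather than the *Lisp used on the CM) is purely illustrative; the particular classes and their bit positions are hypothetical, not those of the CM-Indexer.

# Hypothetical status-bit layout for dictionary definitions: one bit per
# word class, so a definition is just a small integer bit mask.
NOUN, VERB, PREPOSITION, FAMOUS_NAME, PLACE, TIME_TERM = (1 << i for i in range(6))

DICTIONARY = {
    "lincoln": FAMOUS_NAME | PLACE,  # famous person's name, also a place name
    "fly":     NOUN | VERB,
    "monday":  NOUN | TIME_TERM,
}

def lookup(word):
    # Return the status bits for `word` (0 if it is not in the dictionary).
    return DICTIONARY.get(word.lower(), 0)

if __name__ == "__main__":
    bits = lookup("Lincoln")
    print(bool(bits & VERB))         # False: no verb bit for "Lincoln"
    print(bool(bits & FAMOUS_NAME))  # True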
The task of looking up the definition of a word consists of returning a binary number that contains 1's only in bit positions that correspond with the groups to which that word belongs. Thus, the definition of "Lincoln" contains a zero in the bit that indicates a word can serve as a verb, but it contains a 1 in the famous person's name bit.

While all of the examples in this paper involve only a few words, it should be understood that the CM is efficient and cost effective only when large amounts of text are to be processed. One would use the dictionary algorithms described in this paper to look up all of the words in an entire novel; one would not use them to look up the ten words in a user's query to a question answering system.

[Figure 1. Simple Broadcasting Dictionary Algorithm, marking famous names. (Diagram: processors holding a sample sentence; "Lincoln" and then "Michaelangelo" are broadcast in turn, and processors containing the broadcast word are selected and marked with the famous-name bit.)]

[Figure 2. Syntactic Proper Noun Locator. (Diagram: processors with an upper-case, alphabetic first character are selected, those at the start of a sentence are deselected, and the remainder are marked as proper nouns.)]

4 A Simple Broadcasting Dictionary Algorithm

One way to implement a parallel dictionary is to serially broadcast all of the words in a given set. Processors that contain a broadcast word check off the appropriate status bits. When all of the words in one set have been broadcast, the next set is then broadcast. For example, suppose that the dictionary lookup program begins by attempting to mark the words that are also famous last names. Figure 1 illustrates the progress of the algorithm as the words "Lincoln" and then "Michaelangelo" are broadcast. In the first step, all occurrences of "Lincoln" are marked as famous names. Since that word does not occur in the sample sentence, no marking action takes place. In the second step, all occurrences of "Michaelangelo" are marked, including the one in the sample sentence.

In step d, where all processors containing "Michaelangelo" are marked as containing famous names, the program could simultaneously mark the selected processors as containing proper nouns. Such shortcuts will not be examined at this time. After all of the words in the set of famous names have been broadcast, the algorithm would then begin to broadcast the next set, perhaps the set containing the names of the days of the week.

In addition to using this broadcast algorithm, the CM-Indexer uses syntactic definitions of some of the dictionary sets. For example, it defines a proper noun as a capitalized word that does not begin a sentence. (Proper nouns that begin a sentence are not found by this capitalization based rule; this can be corrected by a more sophisticated rule. The more sophisticated rule would mark the first word in a sentence as a proper noun if it could find another capitalized occurrence of the word in a nearby sentence.) Figure 2 illustrates the progress of this simple syntactic algorithm as it executes.

The implementation of both the broadcast algorithm and the syntactic proper noun rule takes a total of less than 30 lines of code in the *Lisp (pronounced "star-lisp") programming language.
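A serial simulation of the broadcast scheme and the syntactic proper noun rule is easy to write. In the Python sketch below (again, not the original *Lisp), a list stands in for the array of CM processors, and each broadcast step becomes a comparison over all cells that the CM would perform in a single parallel operation; the set contents and flag values are hypothetical.

# Serial simulation of the simple broadcasting dictionary: one list cell
# per CM processor, each holding a word and its accumulating status bits.
FAMOUS_NAME, PROPER_NOUN = 1, 2

SETS = {FAMOUS_NAME: ["Lincoln", "Michaelangelo"]}  # sets to broadcast

def broadcast_lookup(words):
    bits = [0] * len(words)
    for flag, members in SETS.items():
        for member in members:             # serial loop over the set...
            for i, w in enumerate(words):  # ...but this inner loop is one
                if w == member:            # parallel compare-and-mark on the CM
                    bits[i] |= flag
    return bits

def mark_proper_nouns(words, bits, sentence_starts):
    # Syntactic rule: a capitalized word not at the start of a sentence
    # is marked as a proper noun.
    for i, w in enumerate(words):
        if w[0].isupper() and i not in sentence_starts:
            bits[i] |= PROPER_NOUN
    return bits

if __name__ == "__main__":
    words = ["We", "saw", "Michaelangelo", "yesterday"]
    bits = mark_proper_nouns(words, broadcast_lookup(words), sentence_starts={0})
    print(bits)  # [0, 0, 3, 0]: famous name + proper noun on "Michaelangelo"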
The entire syntactic rule that finds all proper nouns executes in less than 5 milliseconds. However, the algorithm that transmits word lists takes an average of more than 5 milliseconds per word to broadcast a list of words from the host to the CM. Thus, since it takes time proportional to the number of words in a given set, the algorithm becomes a bottleneck for sets of more than a few thousand words. This means that the larger sets listed above (all nouns, all verbs, etc.) cannot be transmitted. The reason that this slow algorithm was used in the CM-Indexer was the ease with which it could be implemented and tested.

[Figure 3. Unique Words Dictionary Implementation. (Diagram: (a) select all processors not yet defined and find the minimum selected address, terminating if none remain; (b) the host pulls out the word in that minimum processor, looks up its definition in its own serial dictionary/hash table, and selects all processors containing that word; (c) the looked-up definition is assigned to all selected processors, which are marked as defined; (d) go to (a).)]

5 An Improved Broadcasting Dictionary Algorithm

One improvement to the simple broadcasting algorithm would be to broadcast entire definitions (i.e. several bits), rather than a single bit indicating membership in a set. This would mean that each word in the dictionary would only be broadcast once (i.e. "fly" is both a noun and a verb). A second improvement would be to broadcast only the words that are actually contained in the text being looked up. Thus, words that rarely occur in English, which make up a large percentage of the dictionary, would rarely be broadcast.

In summary, this improved dictionary broadcasting algorithm will loop for the unique words that are contained in the text to be indexed, look up the definition of each such word in a serial dictionary on the host machine, and broadcast the looked-up definition to the entire CM. Figure 3 illustrates how this algorithm would assign the definition of all occurrences of the word "the" in a sample text. (Again, in practice the algorithm operates on many thousands of words, not on one sentence.)

In order to select a currently undefined word to look up, the host machine executing this algorithm must determine the address of a selected processor. The figure indicates that one way to do this is to take the minimum address of the processors that are currently selected. This can be done in constant time on the CM.

This improved dictionary lookup method is useful when the dictionary is much larger than the number of unique words contained in the text to be indexed. However, since the same basic operation is used to broadcast definitions as in the first algorithm, it is clear that this second implementation of a dictionary will not be feasible when a text contains more than a few thousand unique words.
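The improved algorithm can be sketched in the same serial-simulation style. Here taking the minimum over the undefined cell indices stands in for the CM's constant-time minimum-selected-address operation, and host_dictionary is a stand-in for the serial dictionary or hash table on the host; all names are ours.

def improved_broadcast_lookup(words, host_dictionary):
    # Look up each unique word once: repeatedly pick the first still-undefined
    # cell, fetch its definition on the host, and broadcast that definition
    # to every cell holding the same word. Serial simulation of Figure 3.
    defined = [False] * len(words)
    definition = [0] * len(words)
    while True:
        undefined = [i for i, d in enumerate(defined) if not d]
        if not undefined:
            break                      # every cell has a definition
        i = min(undefined)             # CM: minimum selected address, O(1)
        word = words[i]
        bits = host_dictionary.get(word, 0)  # serial lookup on the host
        for j, w in enumerate(words):  # CM: one parallel select-and-assign
            if w == word:
                definition[j] = bits
                defined[j] = True
    return definition

if __name__ == "__main__":
    host = {"the": 0b001, "cat": 0b010}
    print(improved_broadcast_lookup(["the", "cat", "the"], host))  # [1, 2, 1]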
By analyzing a number of online texts ranging in size from 2,000 words to almost 60,000 words, it was found that as the size of the text approaches many tens of thousands of words, the number of unique words increased into the thousands. Therefore, it can be concluded that the second implementation of the broadcasting dictionary algorithm is not feasible when there are more than a few tens of thousands of words in the text file to be indexed.

6 Making Efficient Use of Parallel Hardware

In both of the above algorithms, the "heart" of the dictionary resided in the serial host. In the first case, the heart was the lists that represented sets of words; in the second case, the heart was the call to a serial dictionary lookup procedure. Perhaps if the heart of the dictionary could be stored in the CM, alongside the words from the text, the lookup process could be accelerated.

7 Implementation of Dictionary Lookup by Parallel Hashing

One possible approach to dictionary lookup would be to create a hash code for each word in each CM processor in parallel. The hash code represents the address of a different processor. Each processor can then send a lookup request to the processor at the hash-code address, where the definition of the word that hashes to that address has been stored in advance. The processors that receive requests would then respond by sending back the pre-stored definition of their word to the address contained in the request packet.

One problem with this approach is that all of the processors containing a given word will send a request for a definition to the same hashed address. To some extent, this problem can be ameliorated by broadcasting a list of the n (e.g. 200) most common words in English, before attempting any dictionary lookup cycles. Another problem with this approach is that the hash code itself will cause collisions between different text words that hash to the same value.

8 An Efficient Dictionary Algorithm

There is a faster and more elegant approach to building a dictionary than the hashing scheme. This other approach has the additional advantage that it can be built from two generally useful submodules, each of which has a regular, easily debugged structure. The first submodule is the sort function, the second is the scan function. After describing the two submodules, a simple version of the fast dictionary algorithm will be presented, along with suggestions for dealing with memory and processor limitations.

8.1 Parallel Sorting

A parallel sort is similar in function to a serial sort. It accepts as arguments a parallel data field and a parallel comparison predicate, and it sorts among the selected processors so that the data in each successive (by address) processor increases monotonically. There are parallel sorting algorithms that execute in time proportional to the square of the logarithm of the number of items to be sorted. One easily implemented sort, the enumerate-and-pack sort, takes about 1.5 milliseconds per bit to sort 64,000 numbers on the CM. Thus, it takes 48 milliseconds to sort 64,000 32-bit numbers.

[Figure 4. Illustration of Sort. (Diagram: (a) all processors are selected, and each sets an original-address field to its own processor number; (b) a sort is called with the string as the key and with the string and original address as the fields to copy, grouping the words alphabetically while preserving their original addresses.)]

Figure 4 illustrates the effect a parallel sort has on a single sentence; a serial simulation of this step is sketched below.
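In the simulation, Python's built-in sort plays the role of the parallel enumerate-and-pack sort: each word is paired with its original processor address before sorting, exactly as in Figure 4, so that definitions can later be routed home. This is our illustration only.

def sort_with_addresses(words):
    # Pair each word with its original address, then sort by the word.
    # On the CM this is the parallel sort of Figure 4; here it is serial.
    cells = [(w, addr) for addr, w in enumerate(words)]  # record origin first
    cells.sort(key=lambda cell: cell[0])                 # sort on the string key
    return cells

if __name__ == "__main__":
    print(sort_with_addresses(["the", "cat", "ate"]))
    # [('ate', 2), ('cat', 1), ('the', 0)] -- addresses preserved for routing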
Notice that pointers back to the original location of each word can be attached to words before the textual order of the words is scrambled by the sort.

8.2 Scan: Spreading Information in Logarithmic Time

A scan algorithm takes an associative function of two arguments, call it F, and quickly applies it to data field values a, b, c, d, e, ... in successive processors. The scan algorithm produces output fields in the same processors with the values:

a
F(a, b)
F(F(a, b), c)
F(F(F(a, b), c), d)
etc.

The key point is that a scan algorithm can take advantage of the associative law and perform this task in logarithmic time. Thus, 16 applications of F are sufficient to scan F across 64,000 processors. Figure 5 shows one possible scheme for implementing scan. While the scheme in the diagram is based on a simple linked list structure, scan may also be implemented on binary trees, hypercubes, and other graph data structures. The nature of the routing system of a particular parallel computer will select which data structures can be scanned most rapidly and efficiently.

[Figure 5. Illustration of Scan. (Diagram, for an associative function f: (a) all processors are selected, each initializing its function value to its own datum and its forward pointer P to its own address plus one; (b) each processor fetches the function value BF from the processor behind it and replaces its own value F with f(BF, F); (c) each processor replaces its own P with the P held by the processor it points to, so pointers reach twice as far; (d) steps (b) and (c) repeat while any processor has a valid forward pointer. Since f is associative, f(a, f(b, c)) always equals f(f(a, b), c), and f(f(a, b), f(c, d)) = f(f(f(a, b), c), d).)]

When combined with an appropriate F, scan has applications in a variety of contexts. For example, scan is useful in the parallel enumeration of objects and for region labeling. Just as the FFT can be used to efficiently solve many problems involving polynomials, scan can be used to create efficient programs that operate on graphs, and in particular on linked lists that contain natural language text.

8.3 Application of Scan and Sort to Dictionary Lookup

To combine these two modules into a dictionary, we need to allocate a bit, DEFINED?, that is 1 only in processors that contain a valid definition of their word. Initially, it is 1 in the processors that contain words from the dictionary, and 0 in processors that contain words that come from the text to be looked up. The DEFINED? bit will be used by the algorithm as it assigns definitions to text words. As soon as a word receives its definition, it will have its DEFINED? bit turned on. The word can then begin to serve as an additional copy of the dictionary entry for the remainder of the lookup cycle. (This is the "trick" that allows scan to execute in logarithmic time.)

First, an alphabetic sort is applied in parallel to all processors, with the word stored in each processor serving as the primary key, and the DEFINED?
bit acting as a secondary key. The result will be that all copies of a given word are grouped together into sequential (by processor address) lists, with the single dictionary copy of each word immediately preceding any and all text copies of the same word. The definitions that are contained in the dictionary processors can then be distributed to all of the text words in logarithmic time by scanning the processors with the following associative function f:

;; x and y are processors that have the following fields or parallel variables:
;;   STRING            (a word)
;;   DEFINED?          (1 if word contains a correct definition)
;;   ORIGINAL-ADDRESS  (where word resided before sort)
;;   DEFINITION        (initially correct only in dictionary words)
;; function f returns a variable containing the same four fields.
;; This is a pseudo language; the actual program was written in *Lisp.

function f(x, y):
    f.STRING = y.STRING
    f.ORIGINAL-ADDRESS = y.ORIGINAL-ADDRESS
    if y.DEFINED? = 1 then {
        ;; if y is defined, just return y
        f.DEFINED? = 1
        f.DEFINITION = y.DEFINITION
    }
    else if x.STRING = y.STRING then {
        ;; if the words are the same, take any definition that x may have
        f.DEFINED? = x.DEFINED?
        f.DEFINITION = x.DEFINITION
    }
    else
        ;; no definition yet
        f.DEFINED? = 0
    ;; note that text words that are not found in the
    ;; dictionary correctly end up with DEFINED? = 0

This function f will spread dictionary definitions from a definition to all of the words following it (in processor address order), up until the next dictionary word. Therefore, each word will have its own copy of the dictionary definition of that word. All that remains is to have a single routing cycle that sends each definition back to the original location of its text word. Figure 6 illustrates the execution of the entire sort-scan algorithm on a sample sentence.

[Figure 6. Illustration of Sort-Scan Algorithm. (Diagram: (a) both the dictionary words and the text words are stored in the CM; (b) an alphabetic sort merges the dictionary into the text; (c) scanning the f described in the text spreads definitions, leaving unused the definitions of dictionary words absent from the text and a null definition on text words not in the dictionary; (d) each definition is sent back to its original address.)]

8.4 Improvements to the Sort-Scan Dictionary Algorithm

Since the CM is a bit-serial machine, string operations are relatively expensive. The dictionary function f described above performs a string comparison and a string copy operation each time it is invoked. On a full size CM, the function is invoked 16 times (log 64K words). A simple optimization can be made to the sort-scan algorithm that allows the string comparison to be performed only once. This allows a faster dictionary function that performs no string comparisons to be used.

The optimization consists of two parts. First, a new stage is inserted after the sorting step, before the scanning step. In this new step, each word is compared to the word to its left, and if it is different, it is marked as a "header." Such words begin a new segment of identical words. All dictionary words are headers, because the sort places them before all occurrences of identical words. In addition, the first word of each group of words that does not occur in the dictionary is also marked as a header. (Before turning to the details of that optimization, the sketch below simulates the basic sort-scan lookup end to end.)
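In the following Python simulation (ours, not the *Lisp original), a single left-to-right pass plays the role of scanning f across the sorted cells -- on the CM the same result is obtained in logarithmic time by pointer doubling, since f is associative -- and a final routing step returns each definition to its original address. Field names follow the pseudocode above.

def sort_scan_lookup(text_words, dictionary):
    # One cell per processor: dictionary cells are DEFINED?, text cells are not.
    cells = ([{"string": w, "defined": True, "definition": d, "orig": None}
              for w, d in dictionary.items()] +
             [{"string": w, "defined": False, "definition": 0, "orig": i}
              for i, w in enumerate(text_words)])
    # Sort on (string, not defined): each dictionary entry lands immediately
    # before all text copies of the same word.
    cells.sort(key=lambda c: (c["string"], not c["defined"]))
    # Scan f left to right: definitions spread within runs of identical words,
    # because each newly defined cell serves as a copy of the dictionary entry.
    for prev, cur in zip(cells, cells[1:]):
        if (not cur["defined"]) and prev["defined"] and \
           prev["string"] == cur["string"]:
            cur["definition"] = prev["definition"]
            cur["defined"] = True
    # Routing cycle: send each definition back to its original address.
    result = [0] * len(text_words)
    for c in cells:
        if c["orig"] is not None:
            result[c["orig"]] = c["definition"]  # words not found stay 0
    return result

if __name__ == "__main__":
    print(sort_scan_lookup(["the", "cat", "the", "zzz"],
                           {"the": 0b001, "cat": 0b010}))  # [1, 2, 1, 0]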
Next, the following function creates the field that will be scanned:

;; header-p is a parallel boolean variable that is
;; true in headers, false otherwise
function create-field-for-scan(header-p):
    ;; define a type for a large bit field
    var FIELD : record
        ADDRESS     ;; most significant bits contain the processor address
        DEFINITION  ;; least significant bits will contain the definition
    end
    ;; initialize to address 0, no definition
    FIELD.ADDRESS = 0
    FIELD.DEFINITION = 0
    ;; next, the headers that are dictionary words store their definitions
    ;; in the correct part of FIELD. Non-dictionary headers (text words not
    ;; found in the dictionary) are given null definitions.
    if header-p {
        FIELD.DEFINITION = definition
        ;; self-address contains each processor's own unique address
        FIELD.ADDRESS = self-address
    }
    return(FIELD)

Finally, instead of scanning the dictionary function across this field, the maximum function (which returns the maximum of two input numbers) is scanned across it. Definitions will propagate from a header to all of the words within its segment, but they will not cross past the next header. This is because the next header has a greater self-address in the most significant bits of the field being scanned, and the maximum function selects it rather than the earlier header's smaller field value. If a header had no definition, because a word was not found in the dictionary, the null definition would be propagated to all copies of that word.

The process of scanning the maximum function across a field was determined to be generally useful. As a result, the max-scan function was implemented in an efficient pipelined, bit-serial manner by Guy Blelloch, and was incorporated into the general library of CM functions.

[Figure 7. Illustration of Improvements to Sort-Scan Algorithm. (Diagram: (a) after the sort, the headers -- words different from their left neighbor -- are detected; (b) in headers only, the A field is set to the self address and the D field to the definition, if there is one; (c) the maximum function is scanned across the A:D field; (d) definition bits are copied from D to B, and D? is set if defined.)]

Figure 7 illustrates the creation of this field, and the scanning of the maximum function across it. Note that the size of the field being scanned is the size of the definition (8 bits for the timings below) plus the size of a processor address (16 bits). In comparison, the earlier dictionary function had to be scanned across the definition and the original address, along with the entire string. Scanning this much larger field, even if the dictionary function was as fast as the maximum function, would necessarily result in slower execution times.

8.5 Evaluation of the Sort-Scan Dictionary Algorithm

The improved sort-scan dictionary algorithm is much more efficient than the broadcasting algorithms described earlier. The algorithm was implemented and timed on a Connection Machine.

In a bit-serial computer like the CM, the time needed to process a string grows linearly with the number of bits used to store the string. A string length of 8 characters is adequate for the CM-Indexer. Words longer than 8 characters are represented by the simple concatenation of their first 4 and last 4 characters. ASCII characters therefore require 64 bits per word in the CM; 4 more bits are used for a length count.

Because dictionary lookup is only performed on alphabetic characters, the 64 bits of ASCII data described above can be compacted without collision.
Each of the twenty-six letters of the alphabet can be represented using 5 bits, instead of 8, thereby reducing the length of the character field to 40 bits; 4 bits are still needed for the length count. Additional compression could be achieved, perhaps by hashing, although that would introduce the possibility of collisions. No additional compression is performed in the prototype implementation. The timings given below assume that each processor stores an 8 character word using 44 bits.

First of all, to sort a bit field in the CM currently takes about 1.5 milliseconds per bit. Second, the function that finds the header words was timed and took less than 4 milliseconds to execute. The scan of the max function across all of the processors completed in under 2 milliseconds. The routing cycle to return the definitions to the original processors of the text took approximately one millisecond to complete. As a result, with the improved sort-scan algorithm, an entire machine full of 64,000 words can be looked up in about 73 milliseconds. In comparison to this, the original sort-scan implementation requires an additional 32 milliseconds (2 milliseconds per invocation of the slow dictionary function), along with a few more milliseconds for the inefficient communications pattern it requires.

This lookup rate is approximately equivalent to a serial dictionary lookup of .9 words per microsecond. In comparison, a Symbolics Lisp Machine can look up words at a rate of 1/500 words per microsecond. (The timing was made for a lookup of a single bit of information about a word in a hash table containing 1500 words.) Thus, the CM can perform dictionary lookup about 450 times faster than the Lisp Machine.

8.6 Coping with Limited Processor Resources

Since there are obviously more than 64,000 words in the English language, a dictionary containing many words will have to be handled in sections. Each dictionary processor will have to hold several dictionary words, and the look-up cycle will have to be repeated several times. These adjustments will slow the CM down by a multiplicative factor, but Lisp Machines also slow down when large hash tables (often paged out to disk) are queried.

There is an alternative way to view the above algorithm modifications: since they are motivated by limited processor resources, they should be handled by some sort of run time package, just as virtual memory is used to handle the problem of limited physical memory resources on serial machines. In fact, a virtual processor facility is currently being used on the CM.

9 Further Applications of Scan to Bulk Processing of Text

The scan algorithm has many other applications in text processing. For example, it can be used to lexically parse text in the form of 1 character per processor into the form of 1 word per processor. Syntactic rules could rapidly determine which characters begin and end words. Scan could then be used to enumerate how many words there are, and what position each character occupies within its word. The processors could then use this information to send their characters to the word-processor at which they belong. Each word-processor would receive the characters making up its word and would assemble them into a string.

Another application of scan, suggested by Guy L. Steele, Jr., would be as a regular expression parser, or lexer. Each word in the CM is viewed as a transition matrix from one set of finite automaton states to another set.
Scan is used, along with an F which would have the effect of composing transition matrices, to apply a finite automaton to many sentences in parallel. After this application of scan, the last word in each sentence contains the state that a finite automaton parsing the string would reach. The lexer's state transition function F would be associative, since string concatenation is associative, and the purpose of a lexer is to discover which particular strings/tokens were concatenated to create a given string/file.

The experience of actually implementing parallel natural language programs on real hardware has clarified which operations and programming techniques are the most efficient and useful. Programs that build upon general algorithms such as sort and scan are far easier to debug than programs that attempt a direct assault on a problem (e.g. the hashing scheme discussed earlier, or a slow, hand-coded regular expression parser that I implemented). Despite their ease of implementation, programs based upon generally useful submodules often are more efficient than specialized, hand-coded programs.

Acknowledgements

I would like to thank Dr. David Waltz for his help in this research and in reviewing a draft of this paper. I would also like to thank Dr. Stephen Omohundro, Cliff Lasser, and Guy Blelloch for their suggestions concerning the implementation of the dictionary algorithm.

References

Akl, Selim G. Parallel Sorting Algorithms, 1985, Academic Press, Inc.

Feynman, Carl Richard, and Guy L. Steele Jr. Connection Machine Macroinstruction Set, REL 2.8, Thinking Machines Corporation. (to appear)

Hillis, W. Daniel. The Connection Machine, 1985, The MIT Press, Cambridge, MA.

Lasser, Clifford A., and Stephen M. Omohundro. The Essential *Lisp Manual, Thinking Machines Corporation. (to appear)

Leiserson, Charles, and Bruce Maggs. "Communication-Efficient Parallel Graph Algorithms," Laboratory for Computer Science, Massachusetts Institute of Technology. (to appear) (Note: scan is a special case of the treefix algorithm described in this paper.)

Omohundro, Steven M. "A Connection Machine Algorithms Primer," Thinking Machines Corporation. (to appear)

Resnikoff, Howard. The Illusion of Reality, 1985, in preparation.

Waltz, David L. and Jordan B. Pollack. "Massively Parallel Parsing: A Strongly Interactive Model of Natural Language Interpretation," Cognitive Science, Volume 9, Number 1, pp. 51-74, January-March, 1985.
THE INTONATIONAL STRUCTURING OF DISCOURSE

Julia Hirschberg and Janet Pierrehumbert
AT&T Bell Laboratories
600 Mountain Avenue
Murray Hill NJ 07974 USA

ABSTRACT

We propose a mapping between prosodic phenomena and semantico-pragmatic effects based upon the hypothesis that intonation conveys information about the intentional as well as the attentional structure of discourse. In particular, we discuss how variations in pitch range and choice of accent and tune can help to convey such information as: discourse segmentation and topic structure, appropriate choice of referent, the distinction between 'given' and 'new' information, conceptual contrast or parallelism between mentioned items, and subordination relationships between propositions salient in the discourse. Our goals for this research are practical as well as theoretical. In particular, we are investigating the problem of intonational assignment in synthetic speech.

1. Introduction

The role of prosody in discourse has been generally acknowledged but little understood. Linguistic pragmaticists have noted that types of information status (such as given/new, topic/comment, focus/presupposition) can be intonationally 'marked' [1,2,3,4], that reference resolution may depend critically on intonation [5,6], that intonation can be used to disambiguate among potentially ambiguous utterances [7,8], and that indirect speech acts may be signalled by intonational means [9,10,11]. Conversational analysis of naturally occurring data has found that speakers may signal topic shift, digression, and interruption, as well as turn-taking, intonationally [12,13,14]. And the fact that intonational contours contribute in some way to utterance interpretation is itself unexceptionable [8]. To date, however, identification of the prosodic phenomena involved -- and the proper mapping between these phenomena and their semantico-pragmatic effects -- has been largely intuitive, and the intonational phenomena involved have not been precisely described.

Here, we describe how certain of the resources of the intonational system are employed in discourse. In particular, we discuss how speakers' choice of pitch range, accent, and tune contribute to the intentional and attentional structuring of discourse -- the way speakers communicate the relationships among their discourse goals and the relative salience of entities, attributes, and relationships mentioned in the discourse. [1]

[Footnote 1: Grosz and Sidner [15] propose a tripartite view of discourse structure: a linguistic structure, which is the text/speech itself; an attentional structure, including information about the relative salience of objects, properties, relations, and intentions at a given point in the discourse; and an intentional structure, which relates discourse segment purposes (those purposes whose recognition is essential to a segment achieving its intended effect) to one another.]

Our findings emerge from an intensive study of a simple example of speech synthesis: the script of a computer-aided instruction system, TNT (Tutor 'n' Trainer) [16], which employs synthetic speech to tutor computer novices in the text editor vi. Using the Text to Speech system (TTS) [17], we have been able, by systematic variation of pitch range and by a principled choice of accent and tune, to highlight the structure of the tutorial text and thus to enhance its coherence.
While most studies of how intonation is used in discourse have been based solely on examination of intonational contours found in a natural corpus, we have found that intonation synthesis provides a unique opportunity to manipulate the dimensions of variation orthogonally. Thus we can pinpoint factors crucial for a given effect and evaluate various patterns for a given utterance and context.

2. The Domain

TNT was designed to teach computer-naive subjects vi, a simple UNIX screen-oriented text editor. The tutorial portion provides a brief introduction to word processing, to general features of vi, and to the tutor's help facilities; the tutor then guides subjects through a series of learning tasks of graduated difficulty. While the overall task structure is implicit in the tutorial text, the subject can influence the course of the interaction via his/her manipulation of a set of 'helper' keys; these keys provide hints (HINT) and reminders (REMIND) as well as the option of starting a task over again (DO OVER) or suspending the tutorial temporarily (HOLD).

The fact that TNT is explicitly task-oriented [2] makes it a good test-bed for our purposes. An appropriate segmentation of the text, and a notion of the purpose of each segment and the hierarchical relationships among segments, can be independently determined from the task at hand. Also, certain characteristics of the text presented a particularly interesting challenge for our study. First, the script contains little pronominal reference and very few so-called clue words -- words and phrases such as now, next, returning to, but, and on the other hand, which can identify discourse segment boundaries and relationships among segments, signal interruptions and digressions, and so on [19,20]. Both of these phenomena (together with intonation) have been identified as important strategies for communicating discourse structure [15,18,19]. Their virtual absence from the text presents a convenient opportunity for testing the power of intonation to structure a discourse. Second, while we were not able to isolate points in the text where subjects had special difficulties, we did informally observe certain general problems with turn-taking [3] in the tutor -- specifically, it was not always clear when the tutor's turn was over -- which we addressed in our synthesis of the text.

[Footnote 2: That is, the tutorial is organized around a series of data processing tasks, which the subject is guided through. See [18] for discussion of the characteristics of task-oriented domain discourse.]

[Footnote 3: The process by which speakers signal that they have (temporarily) finished speaking and by which hearers interpret such signals [21].]

3. F0 Synthesis

To synthesize the fundamental frequency (f0) contours for the TNT script, we used the intonation synthesis program
T125 H* L H~ H* H* L L~ 9. Need a second draft? T115 L* L* H* H H~ 10. No problem. Tl15 F.96 if* H* L L~ 11. Just change the first, and you've got the TII5 F.87 H* H* H* L H~ H* second. H* L L~ 12. Today, the computer will teach you word T150 F.g6 H* L HS H* H* H* processing. L L~ 13. The computer is new at thiS, so be a good T136 F.gO H* H* L H~ H* student and give it a chance. H* H* H* L L~ 14. We can't answer questions, if you are T136 F.96 H* H* H* L H~ confused. H* L L~ 15. We have to let the computer do all the T125 F.93 H* H* H* teaching. H* L b~ 16. But if ~he computer is not working right, we T125 F.87 H* H* L H~ H* will help you out. H* H* L L~ Figure 1. The TNT Introduction In Figure 1 and in all figures below, 'T' indicates the top of the pitch range in Hz, 'F' indicates amount of compression of the pitch range at the end of declarative phrases, 'H' and 'L' indicate high and low tones, '*' indicates a tone's alignment with a stressed syll- able, and '%' indicates a phrase boundary tone. We discuss these phenomena and our notational system in more detail below. 3.1 Phrasing The first dimension of variation, phrasing, may be indi- cated by a pause, by a lengthening of the phrase-final syllable, and by the occurrence" of extra melodic elements on the end of the phrase. Variation in phrasing is illustrated in Figures 2 and 3. 4 In Figure 2, line 8 is produced as a single phrase, whereas in Figure 3, And is set off as a separate phrase. One consequence of this strategy is that And becomes more prom- inent in the second version. Phrasing variation will not be of cen- tral concern here. Because of the syntactic simplicity of TNT, there were only a few cases where the phrasing could be varied in interesting ways. 4. Note that phonetic transcriptions given in these and subsequent figures represent the somewhat eccentric output ofthe TTS system. 150 _ t25 100 0 isti l i mulnei i s i !aiJp ing 1] I I I I I I I I I I I I 0.5 1.5 2 2.5 AND IT ELIMINATES RETYPING Figure 2. One Phrase '150 425 100 75 oe00°.j i jr el' ! , rii t oi p inq I I I I • I I I I~ f 1 I J I I 0 0.5 1.5 2.5 AND, IT ELIMINATES RETYPING Figure 3. Two Phrases 3.2 Pitch Range When a speaker raises his/her voice, his/her overall pitch range - the distance betweer~ the highest point in the f0 contour and the speaker's baseline (defined by the lowest point a speaker realizes over all utterances) -- is expanded. Thus, the highest points in the contour become higher and other aspects are propor- tionately affected. Figure 4 shows an f0 contour for line 1 in the scriot above in the default pitch range used by TTS. 150 125 t00 75 . h e I o I I l I I I I I 0 0.5 HELLO Figure 4. TTS Default Pitch Range Figure 5 shows the contour actually used in synthesizing the TNT script. 137 450 125 100 75 0.5 HELLO Figure 5. Actual Pitch Range The shape of the actual contour is the same as in Figure 4 but its scaling is different. Changes in pitch range appear to reflect the overall structure of the discourse, with major topic shifts marked by marked increases in pitch range. In addition to variations in overall pitch range, the intona- tion system exploits a local time-dependent type of pitch range variation, called final lowering. In the experiments reported in [24], it was found that the pitch range in declaratives is lowered and compressed in .anticipation of the end of the utterance. Final lowering begins about half a second before the end and gradually increases, reaching its greatest strength right at the end of the utterance. 
This phenomenon appears to reflect the degree of 'final- ity' of an utterance; the more final lowering, the more the sense that an utterance 'completes' a topic is conveyed. Contrast Fig- ures 6 and 7. 125 1oo- /\ "?5- \ n ai s t o k ~g t uu!y uu 1 I I l I I I I I I i 0 0.5 I 4.5 NICE TALKING TO YOU I I I Figure 0. With Final Lowering 125 - \ 1oo 75- \ - I I - 1 I nl ai is t o k ing t !auyl uu / I I il I l I J , li i i i il i i 0 0.5 1 1.5 NICE TALKING TO YOU I I 1 1 Figure 7. Without Final Lowering In the notational system employed here, T represents the topline, of a phrase -- the maximal value for the f0 contour in the phrase. F expresses the amount of final lowering in terms of the ratio of the lowered pitch range to the starting pitch range. The default value assumed below for T is 115 Hz and for F is 0.87. 3.3 Accent Pitch accents, which fall on the stressed syllable of lexical items, mark those items as intonationally prominent. In line 16, for example, right has no pitch accent. If right were to be especially emphasized, it would have an accent. (In our notation, the absence of a specified accent indicates that a word is not accented; where we wish to highlight this point, we will employ '-' to mark a deac- cented word.) The contrasting outcomes are shown in Figures 8 and 9. 75I !!J !!! L!! = i/,' u .., ,,, /-- / olt wer'kingr oi t i i I 0.5 1 1.5 2 2.5 BUT IF THE COMPUTER IS NOT WORKING RIGHT Figure 8. Right Deaccented 150 _ 125 I00 75 bl tl J_ I I f duhkuimmp y L~u / I I II I I 1 I I I 0 O. 5 I Ieni z v a It w, er k ilgr a i t i i i I i~ l [ I I I 1.5 2 2.5 BUT IF THE COMPUTER IS NOT WORKING RIGHT Figure 9. Right Accented In the first case, the last f0 peak occurs on work and there is a fall to a low pitch on right, then a rise at the end of the phrase. In the second case, the entire peak-fall-rise configuration occurs on the word right. There are six types of pitch accent in English [23], two sim- ple tones -- high and low -- and four complex ones. The most fre- quently used accent, the simple high tone, comes out as a peak on the accented syllable (as, on right in Figure 9) and will be represented below as H*. The 'H' indicates a high tone, and the '*' that the tone is aligned with a stressed syllable. In some cases, we have used a L* accent, which occurs much lower in the pitch range than H* and is phonetically realized as a local f0 minimum. The accent on make in Figure 13 below is a L*. The other English accents have two tones. Figure 10 shows a version of the sentence in Figures 2 and 3 with a L+tt* accent substituted for both H* accents in the second phrase. 138 t50 _ t25 t00 :l 75- I 0 ae nnd ~ i~ i I i rr Ir i ! I I I I I I l I I 0.5 I AND, IT ELIMINATES Figure 10. An L+H* i i I , eilt s rfii l ,ai ,p i iq I , I II I I I I I 1 t.5 2 2.5 RETYPING Accent Note that there are still peaks on the stressed syllables, but now a striking valley occurs just before each peak. In our synthesis of the TNT script, we have made extensive use of the type of accent transcribed in [23] as H*+L. This accent, like other bitonal accents, triggers a rule which compresses the pitch range on following material in the phrase, a phenomenon known as downstep or catathesls. For example, a simple con- trast between H* H* and H*+L H*+L is illustrated in Figures 11 and 12 in two versions of the tutorial command to hit the 'remind' helper key -- Hit remind. '257 0 0.5 HIT REMIND nn d t50 125 100 75 Figure 11. H* H* L L% / 1 I i t r i m II i I 0 0.5 i i ,I I I I d I I HIT REMIND I I Figure 12. 
We have made particular use of downstepped contours such as this -- i.e., sequences of H*+L tones -- which we will term H*+L sequences in the discussion below. (See Section 4.3.)

The way a speaker is structuring a text helps to determine where pitch accents will fall, as a speaker indicates how referents of accented or deaccented items are related to other items in the utterance or in some larger context.

In addition to pitch accents, each intonational phrase has a phrase accent and a boundary tone. These two extra tones may be either L or H. The boundary tone (indicated by '%') falls exactly at the phrase boundary, while the phrase accent (indicated by an unadorned H or L) spreads over the material between the last pitch accent and the boundary tone. Each intonational phrase contains one or more pitch accents, a phrase accent, and a boundary tone.

3.4 Tune

A phrase's tune or melody is defined by its particular sequence of pitch accents, phrase accent, and boundary tone. Thus, H* L L% represents a tune with a H* pitch accent, a L phrase accent, and a L% boundary tone. This is an ordinary declarative pattern with a final fall. An interrogative contour is represented by L* H H%. The contrast between these two melodies is illustrated in Figures 13 and 14. Figure 13 shows the actual f0 contour for line 5 of the TNT introduction, produced as a question.

[Figure 13. Interrogative Contour. (F0 contour plot of "MAKE A TYPO?")]

Figure 14 shows a declarative pattern for the same sentence.

[Figure 14. Declarative Contour. (F0 contour plot of "MAKE A TYPO.")]

With the declarative intonation characteristic of imperatives, line 5 would probably convey that the hearer was being ordered to produce a typo. Roughly speaking, the tune appears to convey information about speaker attitudes and intentions (as, the speech act the speaker intends to perform) and about the relationship between utterances in a discourse.
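The notation used throughout can be captured in a small data structure. The Python sketch below parses annotations like 'H* H* L L%' into their pitch accents, phrase accent, and boundary tone, enforcing the phrase shape just described; it is our illustration of the notation, not a component of TTS.

PITCH_ACCENTS = {"H*", "L*", "L+H*", "L*+H", "H*+L", "H+L*"}  # the six English accents

def parse_tune(annotation):
    # Split a tune annotation such as "H* H* L L%" into its parts:
    # one or more pitch accents, a phrase accent, and a boundary tone.
    tokens = annotation.split()
    assert tokens[-1] in ("H%", "L%"), "tune must end in a boundary tone"
    assert tokens[-2] in ("H", "L"), "a phrase accent must precede the boundary tone"
    accents = tokens[:-2]
    assert accents and all(a in PITCH_ACCENTS for a in accents)
    return {"pitch_accents": accents,
            "phrase_accent": tokens[-2],
            "boundary_tone": tokens[-1]}

if __name__ == "__main__":
    print(parse_tune("H* H* L L%"))  # declarative fall
    print(parse_tune("L* H H%"))     # interrogative rise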
So, for example, the TNT introduction above might be segmented as follows (where utter- ances are labeled by line number): 5 Table 1. Segmenting the TNT Introduction {0 3 4 6 7 9 10 11 This bracketing schema defines a discourse segment as any node together with all the nodes it dominates; for example, lines 1-11 form a segment, as do lines 14-16, and so on. An alternative depic- tion of the hierarchy above would be [{0}[1/2 3 [4 [5 6 7] [8 9 10 11]]] [12 [13] [14 15 16]]]fi Evidence for such hierarchical segmenta- tion in general is found in instances of pronominal reference to referents linearly distant in the discourse; in such cases, a notion of hierarchical proximity appears plausible. Previous research [12,14] has observed that 'topic jump' can be signalled by raised pitch, as well as increased a~plitude and markers of self-editing, hesitation, and discontinuity -.and that pauses and changes in rate characterize segment boundaries. In our work with the TNT script, we found that a hierarchical seg- mentation of discourse can be marked by systematic variation in pitch range, which can signal movement betweeen levels in the seg- ment hierarchy. In addition, by varying the amount of final rais'ihg or lowering at the end of phrases, we can indicate the degree of conceptual continuity between one phrase and the next. We have developed algorithms for assigning pitch range and fiual raising/lowering in terms of the discourse segmentation. 5. We do not claim this is the only possible segmentation, only that it is a plausi- ble one to convey. 6. Note that I and 2 are treated as a unit here, although they are synthesized as separate phrases, since it seemed semantically correct. To illustrate the algorithms, we relate the TNT introduction presented in Figure 1 to its segmentation in Table 1. When the introduction is synthesized using the TTS'default pitch range of 75-11~ Hz, the topline for each utterance will remain around 115 Hz. However, the hierarchical relationship schematized above among the various segments may be signalled more clearly if the pitch range is varied. In our version of the script, each segment boundary is marked by a variation in pitch range which correlates with the segment's position in the overall discourse. So, major boundaries are denoted by the largest increases, with smaller increases marking subsegment boundaries, and so on. The segment beginning at 1, for example, is marked by raising the f0 topline to 150 Hz; that beginning at 14, by raising the topline to 136 Hz; and that beginning at 15, by raising the topline to 125 Hz. 7 Human speakers do seem to employ a wider spectrum of pitch range varia- tion than we have been able to use in synthesis, however. We would claim that the appropriateness of changes in pitch range is a function of the segmentation hierarchy -- and is not inherent in the utterance in isolation. Our algorithm for pitch range assignment can in fact enforce one segmentation of a given discourse over another and, in so doing, can disambiguate among potentially ambiguous reference resolutions. For example, It in line 7 of Figure 1 coindexes mistake, while it in line 8 coindexes word processing. A simple linear approach to reference resolution (such as [26] ) would have the second coindexical with the previous noun-phrase (np), mistake, but a hierarchical approach to discourse structure holds out the possibility that a referent in a segment dominating the current segment may also provide a referent [18], as, in fact, is the case here. 
While a little thought will make the appropriate referent clear, it is clearer when line 8 is produced with a larger pitch range to signal the beginning of a new subsegment of the segment headed by 4. By so doing, we lessen the possibility that a referent for this it will be sought in lines 5-7. The most likely candidate, found in 4, is now both intonationally and conceptually 8's superordinate discourse segment.

While an increase in the pitch range indicates segment boundaries, a decrease in the final lowering effects can indicate the absence of such boundaries, and thus indicate that a given utterance and one which follows it are part of the same segment. So, manipulation of final lowering can also serve to indicate discourse structure, by identifying the internal structure of segments. For example, at one point in the TNT script, the following utterance constitutes an entire discourse segment, so it has the default final lowering (F=0.87); in consequence, the L% tone at the end of had will be only 87% as high as it would have been if final lowering had not applied.

F.87
Type had.
H* H* L L%

Compare this with:

F.93
Type had.
H* H* L L%
When you're done,
H* L H%
hit changer.
H* L L%

Here, the same utterance is synthesized with less final lowering -- the L% tone at the end of had, in particular, will attain 93% of its target height. In this segment, the first line does not end the segment.

We further propose that the degree of final lowering may correlate with the utterance's position in the discourse hierarchy. Specifically, we suggest that minimal final lowering may indicate a 'push' onto the segment stack and greater degrees of final lowering may be associated with 'pops' of this stack. In our synthesis of the TNT text, we have varied the degree of final lowering for such 'pops' based upon the level of the segment which this utterance 'completes' (or, equivalently, the level of the segment the next utterance begins). So, to determine the amount of final lowering to assign when synthesizing line 7 in the TNT introduction, we first determine whether it completes a segment (representing a pop) or not (representing a push). If the former, we may note either that it completes the segment begun at line 5 (with a topline of 136 Hz), or that the subsequent segment is begun (by line 8) with a topline of 136 Hz. We assign final lowering of 0.90 when synthesizing line 7 based on either observation; this rather large amount (close to the synthesizer's default maximum of 0.87) conveys a relatively important change of subtopic within the larger discourse segment by indicating rather more disjunction than we would want, for example, between lines 9 and 10.

We are currently testing the associations between pitch range/final lowering variation and discourse structure proposed above in several ways: by pitch-tracking a large corpus of natural speech, 8 by recording and analyzing subjects reading structured texts, and by asking subjects to perform tasks such as reference resolution from texts synthesized with varying pitch ranges.

8. From interviews collected by A. Kroch and G. Ward and from recordings made of a radio financial advice program by J. Hirschberg and M. Pollack.
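The two Section 4.1 devices can be summarized as a small decision procedure. The sketch below is our own schematic reconstruction, not the implemented algorithm: the Hz toplines and F values are the ones reported above, while the function names and the push/pop encoding are illustrative assumptions.

```python
# Schematic reconstruction of the Section 4.1 manipulations (not the
# implemented algorithm).  Toplines and F values are from the text.

DEFAULT_TOPLINE = 115                          # Hz; the TTS default
TOPLINE_BY_DEPTH = {1: 150, 2: 136, 3: 125}    # Hz, per segment level

def topline(begins_segment_at_depth=None):
    """Raise the f0 topline when an utterance begins a segment; deeper
    subsegments get smaller raises."""
    if begins_segment_at_depth is None:
        return DEFAULT_TOPLINE
    return TOPLINE_BY_DEPTH.get(begins_segment_at_depth, DEFAULT_TOPLINE)

def final_lowering(pops_segment, popped_level_topline=None):
    """F near 1.0 = little lowering ('push': the segment continues);
    F near the default 0.87 = strong lowering ('pop': a segment ends).
    0.90 is the value assigned above to line 7 of the TNT introduction."""
    if not pops_segment:
        return 0.93        # minimal lowering: more to come
    if popped_level_topline is not None and popped_level_topline >= 136:
        return 0.90        # pop completing a relatively high segment
    return 0.87            # default: utterance stands alone
```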
4.2 Accent Placement

Accent placement, too, can convey information about the structure of a discourse. Traditionally, it has been noted that stress, or accent, can convey information about the focus of an utterance, about given or new information in the discourse, 9 about parallelism, or about contrastiveness. In more general terms, one might say that accent placement appears to be associated with Grosz and Sidner's [15] attentional structure -- the salience of discourse entities, properties, relations, and intentions at any point in the discourse. We have particularly noted that the decision to accent or deaccent some item is sensitive to the position of that item in the discourse structure -- that is, just as salience is always determined relative to some particular context, accent placement must be determined with respect to the segment in which the accentable item appears. We take the position that it is the signaling of salience relative to the discourse segment that produces the secondary effects of given-new distinction, topic-hood or contrastiveness, and the favoring of one reference resolution over another.

9. Prince [27] notes that the 'given/new' distinction has been variously defined as predictable/unpredictable, salient/not salient, shared/not shared knowledge, and proposes a more complex taxonomy of 'assumed familiarity' classifying discourse entities as new, inferrable, or evoked (either textually or situationally). This is closely related to -- and often confused with -- the notion of utterance topic/focus.

One of the more common observations about the role of accent placement and the structuring of discourse is that accent can mark some item in the discourse as in focus -- i.e., as 'what is being talked about' [28,29] -- particularly when syntactic or thematic information might predict otherwise. For example, in the following instructions, erase is accented in line 2 to indicate that the action of 'erasing' is the focus of the current task.

1. Type hello.
H* H* L L%
2. Next, let's erase hello.
H* L H% H* H* - L L%
3. Hit hint.
H* H* L L%

For similar reasons, we accent hello in line 1 and deaccent it in line 2. While focus considerations clearly influence accent placement, determining accent placement solely on the basis of utterance-level focus (as proposed in Gussenhoven [29] and Culicover and Rochemont [30]) is insufficient. Considerations such as the given/new distinction play an important role. Speakers typically deaccent given information and accent new information, as when the 'new' information typing is accented and the 'old' word processing is not in line 3 below:

Welcome to word processing.
H* H* L L%
That's using a computer to write letters
H* H* H* H* H*
and reports.
H* L L%
Word processing makes typing easy.
H* H* H* L L%

Note that these items are marked as 'given' and 'new' within the current segment -- although they may have other status within the larger discourse. Furthermore, items appear 'given' or 'new' not simply because of prior mention (or lack thereof) in a context but via 'physical co-presence', where speaker, hearer, and referents are physically and openly present together [31]; shared world knowledge; or conceptual proximity [1].
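One way to see the force of this segment-relative notion of givenness is as a decision procedure. The sketch below is our own simplification -- prior mention within the current segment and physical co-presence are the only criteria checked, whereas the discussion above also admits inferrables, shared world knowledge, and conceptual proximity:

```python
# Simplified accent decision (our illustration): deaccent what is given
# relative to the current discourse segment; accent what is new.

def should_accent(word, mentioned_in_segment, copresent):
    """mentioned_in_segment: items already evoked in this segment;
    copresent: items physically and openly present to speaker/hearer."""
    w = word.lower()
    given = w in mentioned_in_segment or w in copresent
    return not given

# After the student has just typed 'mary', the character 'm' is
# co-present, hence given, hence deaccented; 'capital' is new:
assert should_accent('m', set(), {'m'}) is False
assert should_accent('capital', set(), {'m'}) is True
```

The co-presence case is the one illustrated next.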
For example, the tutor can treat m as given in the following text because the student has just (incorrectly) typed mary; the character 'm', the student, and the tutor are thus physically copresent.

Oops. Capital m.
H* L H% H* - L L%

The new information is that 'm' is to be capitalized. Thus capital is accented. Similarly, in the introduction to the tutor presented in Figure 1, we can deaccent mistake because it is a super-concept of the previously mentioned typo:

Make a typo?
L* H* H H%
No problem.
H* H* L L%
Just back up, type over the mistake,
H* H* L H% H* H* L H%
and it's gone.
H* L L%

We also examine how pronominalization interacts with accent placement. Since the ability to pronominalize is itself a standard test of givenness, prowords, like other given items, are commonly deaccented. If they are accented, the hearer may draw very different conclusions from an utterance. The following utterance, for example, may well convey an instruction to type the word something or even a reprimand for not typing anything yet:

Let's begin by typing something.
H* H* H*

Since the TNT script employs little pronominalization, we often use deaccenting to 'intonationally pronominalize' repetitions of lexical items. Accent can also signal that a discourse referent other than that which would be 'most likely' without special accentuation should be sought, as in:

1. We can't answer questions, if you are confused.
H* H* H* L H% H* L L%
2. We have to let the computer do all the teaching.
H* H* H* H* L L%

Here (and in particular at line 1), we is intended to refer to the humans supervising the testing of the tutor, although these humans have not previously been mentioned in the script. However, this reference might easily be interpreted as referring to the tutorial system itself. Since pronouns -- as 'given' information -- are commonly deaccented, we accent this one to indicate that an 'unusual' referent should be sought. 10 So, both accent placement and manipulation of pitch range can be used to reorder the list of potential referents for a given referring expression.

10. The standard example of accentuation influencing pronominal reference resolution in this way is 'John hit Bill and then HE hit HIM' [32].

Finally, contrastiveness or parallelism may also be communicated via accent. For example, second is accented in 3, although it is certainly given in this segment (via mention of second draft in 1):

1. Need a second draft?
L* L* L* H H%
2. No problem.
H* H* L L%
3. Just change the first, and you've got the second.
H* H* L H% H* H* L L%

Note that, while second may be 'given' at the discourse segment level, the decision to accent it is based on contrast within a smaller context, 3. Furthermore, if this function of accent is ignored, contrastiveness may be inferred incorrectly. If we accent we in the last line of the tutorial introduction WE will help you out, for example, the student would be entitled to infer that others will not be helpful. We are currently developing algorithms for determining accent placement, based upon the interaction of focus, given/new, parallelism, contrastiveness, and pronominal reference within segment and phrase.

4.3 Choice of Tune

It is now widely accepted that the overall melody a speaker employs in an utterance can communicate some semantic or pragmatic information. However, since there are few particular tune types for which we can specify with any confidence just what the meaning might be, it is difficult to generalize about what type of information tunes in general can convey.
From those tunes whose 'meaning' seems fairly well understood -- namely, declarative, yes-no question [23], surprise/redundancy [10], contradiction contour [33], rise-fall-rise [25], and continuation rise [34,35] contours -- we propose that tunes convey two sorts of information about discourse.

First, we believe that contours can convey propositional attitudes 11 the speaker wishes to associate with the propositional content of an utterance. For example, the speaker may wish to convey that s/he knows x, or that s/he believes x, or that s/he is uncertain about x, or that s/he is ignorant of x. In the case of H*+L sequences, it appears that a speaker may convey his/her (propositional) attitudes about a hearer's (propositional) attitudes toward an utterance. This tune seems to indicate the speaker's belief that the speech act s/he is performing is superfluous. For example, a speaker may employ it to convey that the propositional content of his/her utterance is already known or would be obvious to the hearer (who, of course, may or may not be attending to it). Note that the speaker need not actually believe that this information is known in order to wish to convey this meaning. Particularly in pedagogical texts, this contour seems appropriate to introduce straightforward material, as in the following instruction to hit the remind key.

Remind tells you again what to do if you forget.
H* L H% H* H* H* H* L L%
Hit remind.
H*+L H*+L L L%

However, an H*+L sequence is not appropriate in the following similar exchange:

Next, let's erase hello.
H* L H% H* H* L L%
Hit hint.
H* H* L L%

In general, such contours do not seem felicitous when the utterance conveys information which the speaker believes will be unexpected for the hearer. Here tune choice may reflect attentional as well as intentional aspects of the discourse structure. Like the deaccenting of references to given items, H*+L sequence contours seem to convey 'givenness' at a more general level.

11. Propositional attitudes include knowing, believing, intending, uncertainty, and ignorance.

Second, we believe that tune can convey the speaker's commitment to some semantico-pragmatic structural relationship holding between the propositional content of utterances (as, that one 'completes' another or is subordinate to another). Many such relations have been proposed in textual analysis [36,37,15]. In the phonological literature, continuation rise has been commonly associated with some sense of 'continuation' or 'more to come' [34]. We have found, however, that this contour can be characterized more precisely as conveying a subordination relationship between the phrase uttered with continuation rise and other utterances in the discourse segment. For example, if the second phrase of line 1 is uttered with continuation rise, then this utterance appears to be subordinated to 2.

1. We can't answer questions, if you are confused.
H* H* H* L H% H* L H%
2. We have to let the computer do all the teaching.
H* H* H* H* L L%
3. But if the computer is not working right, we will help you out.
H* H* L H% H* H* H* L L%

That is, 2 'completes' 1. Without continuation rise on 1, all three utterances will appear to have equal status in the segment.
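The tune 'meanings' discussed in this section suggest a rough selection procedure of the following kind. This is a sketch under our own simplifying assumptions -- real tune choice is far less categorical -- with each tune written as a (pitch-accent sequence, phrase accent, boundary tone) triple:

```python
# Illustrative tune selection (our simplification of the observations
# in this section, not a proposed algorithm).

TUNES = {
    'declarative':       (['H*'], 'L', 'L%'),
    'yes-no question':   (['L*'], 'H', 'H%'),
    'obvious/given':     (['H*+L', 'H*+L'], 'L', 'L%'),  # H*+L sequence
    'continuation rise': (['H*'], 'L', 'H%'),            # subordination
}

def choose_tune(speech_act, speaker_marks_obvious, subordinated_to_later):
    """Pick a tune from the speech act plus two discourse conditions."""
    if speech_act == 'yes-no question':
        return TUNES['yes-no question']
    if speaker_marks_obvious:          # content presented as superfluous
        return TUNES['obvious/given']
    if subordinated_to_later:          # 'completed' by a later utterance
        return TUNES['continuation rise']
    return TUNES['declarative']
```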
Furthermore, continuation rise is not felicitous in all contexts in which the simple sense that 'there is more to come' clearly should be appropriate; for example, continuation rise over 3 -- at the end of the tutorial introduction -- seems quite odd, even though more will clearly follow.

In synthesizing the TNT script, we have employed only a small subset of possible English tunes. Analysis of the 'meaning' of additional tunes is part of our future research. More generally, we must examine how structural relationships conveyed by tunes such as H*+L sequence are associated with those conveyed by pitch range.

We have described certain mappings between intonational features and discourse phenomena, associating pitch range variation with the identification of discourse segments and with their internal coherence; accent with types of information status such as topic (focus) and the given/new distinction, with reference resolution and with contrastiveness; and tune choice with the relationships among propositions in the discourse as well as with some propositional attitude the speaker wishes to associate with those propositions. It appears that pitch range and accent placement are most closely associated with a discourse's attentional structure, while tune choice is more closely associated with its intentional structure. However, clearly this picture is too simple. Several intonational features may be used together to create some discourse effect; moreover, in some cases two distinct intonational phenomena seem to produce discourse effects that seem intuitively to be closely related. And sometimes several discourse phenomena may indicate conflicting intonational strategies. These problems are the subject of our future research.

5. Discussion

The central thesis of this work is that there are many ways in which intonation helps to structure discourse. By understanding the mapping between intonational phenomena and discourse phenomena, we can enhance both our ability to interpret what speakers try to convey and our ability to synthesize speech more effectively. We have described three major intonational phenomena -- pitch range, accent, and tune -- and some of the information they allow speakers to communicate about discourse, demonstrating some links between discourse and intonational phenomena which have not been noted in the literature and refining some notions which have. We also identify major issues which future research on the relationship between discourse and intonation must address, including a more precise mapping between discourse and intonational phenomena, the interaction of intonational phenomena to produce particular discourse effects, and the way conflict between intonational strategies signaled by various aspects of the discourse may be resolved.

We are currently testing and refining our hypotheses by 1) pitch tracking recorded natural discourse to determine pitch range manipulation, and 2) conducting pilot empirical studies of how principled manipulation of pitch range can affect reference resolution. We are also examining in some detail the relationship between pronominalization and deaccenting, pursuant to the development of better accenting algorithms for synthetic speech. Our ultimate goals are practical as well as theoretical. Once we have determined how particular intonational phenomena are related to particular discourse phenomena, the next step is to determine how these findings can be applied to natural-language generation.
In particular, how much intonational structuring of generated text can be done automatically? What sorts of information must be represented to support the assignment of rhetorically effective intonation?

ACKNOWLEDGEMENTS

We would like to thank Lloyd Nakatani and Dennis Egan for help with TNT, Barbara Grosz and Candy Sidner for useful discussions, Mary Beckman, Diane Litman, and Ken Church for comments on earlier drafts, and Mark Liberman for assistance with the TTS system and the development of its prosody.

REFERENCES

[1] Chafe, W., Givenness, contrastiveness, definiteness, subjects, topics, and point of view, in Subject and topic, ed. Li, C., Academic Press, New York (1976).
[2] Schmerling, S., Presupposition and the notion of normal stress, Papers from the Seventh Regional Meeting of the Chicago Linguistic Society, Chicago (1971).
[3] Schmerling, S., A re-examination of the notion NORMAL STRESS, Language 50, pp. 66-73 (1974).
[4] Wilson, D., and Sperber, D., Ordered entailments: an alternative to presuppositional theories, pp. 229-324 in Syntax and semantics 11, ed. Oh, C.-K., and Dinneen, D. A., Academic Press, New York (1979).
[5] Gleitman, L., Pronominals and stress in English, Language Learning 11, pp. 157-169 (1961).
[6] Gundel, J., Stress, pronominalization, and the given-new distinction, University of Hawaii Working Papers in Linguistics 10(2), pp. 1-13 (1978).
[7] Jackendoff, R. S., Semantic interpretation in generative grammar, MIT Press, Cambridge MA (1972).
[8] Ladd, D. R., The structure of intonational meaning, Indiana University Press, Bloomington (1980).
[9] Austin, J. L., How to do things with words, Clarendon Press, Oxford (1962).
[10] Sag, I. A., and Liberman, M., The intonational disambiguation of indirect speech acts, Papers from the Eleventh Regional Meeting of the Chicago Linguistic Society, pp. 487-498, Chicago (1975).
[11] Sadock, J., Toward a linguistic theory of speech acts, Academic, New York (1974).
[12] Schegloff, E. A., The relevance of repair to syntax-for-conversation, pp. 261-288 in Syntax and semantics 12: Discourse and syntax, ed. Givon, T., Academic, New York (1979).
[13] Brazil, D., Coulthard, M., and Johns, C., Discourse intonation and language teaching, Longman, London (1980).
[14] Butterworth, B., Hesitation and semantic planning in speech, Journal of Psycholinguistic Research 4, pp. 75-87 (1975).
[15] Grosz, B. J., and Sidner, C. L., The structures of discourse structure, 6097, BBN Laboratories Inc. (November 1985). Also appears as CSLI-85-39 and as Technical Note 369 from the AI Center, SRI International, and will appear in Computational Linguistics, 1986.
[16] Nakatani, L., Egan, D., Ruedisueli, L., and Hawley, P., TNT: A talking tutor 'n' trainer for teaching the use of interactive computer systems, to be presented at the Conference on Human Factors in Computing Systems, April 13-17, 1986 (1986).
[17] Olive, J. P., and Liberman, M. Y., Text to speech -- An overview, J. Acoust. Soc. Am. Suppl. 1 78(Fall), p. s6 (1985).
[18] Levy, E. T., and Grosz, B., Communicating thematic structure in narrative discourse: the use of referring terms and gestures, PhD thesis, University of Chicago (1984).
[19] Reichman, Rachel, Getting computers to talk like you and me, MIT Press, Cambridge MA (1985).
[20] Cohen, R., A computational model for the analysis of arguments, PhD thesis, University of Toronto (1983).
[21] Sacks, H., Schegloff, E., and Jefferson, G., A simple systematics for the organization of turn-taking for conversation, Language 50, pp. 696-735 (1974).
[22] Anderson, Mark D., Pierrehumbert, Janet B., and Liberman, Mark Y., Synthesis by rule of English intonation patterns, Proceedings of the International Conference on Acoustics, Speech, and Signal Processing, Vol. 1, pp. 2.8.1-2.8.4, San Diego (1984).
[23] Pierrehumbert, J., The phonology and phonetics of English intonation, PhD thesis, MIT (1980).
[24] Liberman, M., and Pierrehumbert, J., Intonational invariants under changes in pitch range and length, in Language sound structure, ed. Aronoff, M., and Oehrle, R., MIT Press, Cambridge (1984).
[25] Ward, G., and Hirschberg, J., Implicating uncertainty: the pragmatics of fall-rise intonation, Language 61(4), pp. 747-776 (1985).
[26] Winograd, T., Understanding natural language, Academic Press, New York (1972).
[27] Prince, E. F., Towards a taxonomy of given-new information, pp. 223-256 in Radical pragmatics, ed. Cole, P., Academic, New York (1981).
[28] Sidner, C. L., Towards a computational theory of definite anaphora comprehension in English discourse, PhD thesis, MIT (1979). Also appears as TR 537, MIT AI Lab.
[29] Gussenhoven, C., On the grammar and semantics of sentence accents, Foris, Dordrecht, Neth. (1983). Publications in Language Sciences, 16.
[30] Culicover, Peter W., and Rochemont, Michael, Stress and focus in English, Language 59(1), pp. 123-165 (1983).
[31] Clark, H. H., and Marshall, C. R., Definite reference and mutual knowledge, in Elements of discourse understanding, ed. Joshi, A., Webber, B., and Sag, I., Cambridge University Press, Cambridge (1981).
[32] Lakoff, G., Presupposition and relative well-formedness, pp. 329-340 in Semantics, ed. Steinberg, D., and Jakobovits, L., Cambridge University Press, Cambridge (1971).
[33] Liberman, M., and Sag, I., Prosodic form and discourse function, Papers from the Tenth Regional Meeting of the Chicago Linguistic Society, pp. 416-427, Chicago (1974).
[34] Bolinger, D., Intonation and its parts, Language 58(3), pp. 505-533 (1982).
[35] Bing, J., Aspects of English prosody, PhD thesis, University of Massachusetts at Amherst (1979). Reprinted by the Indiana University Linguistics Club, 1980.
[36] Mann, W. C., Moore, M. A., Levin, J. A., and Carlisle, J. H., Observation methods for human dialogue, RR/75/33, ISI (1975).
[37] McKeown, K., Generating natural language text in response to questions about database structure, PhD thesis, University of Pennsylvania (1982).
THE CONTRIBUTION OF PARSING TO PROSODIC PHRASING IN AN EXPERIMENTAL TEXT-TO-SPEECH SYSTEM

Joan Bachenko, Eileen Fitzpatrick, C. E. Wright
AT&T Bell Laboratories
Murray Hill, New Jersey 07974

ABSTRACT

While various aspects of syntactic structure have been shown to bear on the determination of phrase-level prosody, the text-to-speech field has lacked a robust working system to test the possible relations between syntax and prosody. We describe an implemented system which uses the deterministic parser Fidditch to create the input for a set of prosody rules. The prosody rules generate a prosody tree that specifies the location and relative strength of prosodic phrase boundaries. These specifications are converted to annotations for the Bell Labs text-to-speech system that dictate modulations in pitch and duration for the input sentence. We discuss the results of an experiment to determine the performance of our system. We are encouraged by an initial 5 percent error rate, and we see the design of the parser and the modularity of the system allowing changes that will upgrade this rate.

INTRODUCTION

We describe an experimental text-to-speech system that uses a deterministic parser and prosody rules to generate phrase-level pitch and duration information for English input. This information is used to annotate the input sentence, which is then processed by the text-to-speech programs currently under development at Bell Labs. In constructing the system, our goal has been to test the hypotheses (i) that information available in the syntax tree, in particular, grammatical functions such as subject-predicate and head-complement, is by itself useful in determining prosodic phrasing for synthetic speech, and (ii) that it is possible to use a syntactic parser that specifies grammatical functions to determine prosodic phrasing for synthetic speech.

Although certain connections between syntax and prosody are well-known (e.g. the influence of part of speech on stress in words like progress, or the setting off of parenthetical expressions), very little practical knowledge is available on which aspects of syntax might be connected to prosodic phrasing. In many studies, investigators have sought connections between constituent structure and prosody (e.g. Cooper and Paccia-Cooper 1980, Umeda 1982, Gee and Grosjean 1983) but, with the exception of Selkirk (1984), they tend to neglect the representation of grammatical functions in the syntax tree. Moreover, previous work has not been specific enough to provide the basis for a full system implementation. Based on our study of prosodic phrasing in recorded human speech, we decided to emphasize three aspects of structure that relate to phrasing: syntactic constituency, grammatical function, and constituent length. These findings, which we will discuss in detail, have been implemented as a collection of prosody rules in an experimental text-to-speech system.

Two important features characterize our system. First, the input to our prosody system is a parse tree generated by a version of the deterministic parser Fidditch (Hindle 1983). The left-corner search strategy of this parser and, in particular, its determinism give Fidditch the speed that makes online text-to-speech production feasible. 1 In building a parse tree, Fidditch identifies the core subject-verb-object relations but makes no attempt to represent adjunct or modifier relations. Thus relative clauses,
adverbials, and other non-argument constituents have no specified position in the tree and no specified semantic role. Second, the rules in the prosody system build a prosody tree by referring both to the syntactic structure and to earlier stages of prosodic structure. The result is a hierarchical representation that supports the view, also proposed in Selkirk (1984), that grammatical function information is related to prosodic phrasing, but indirectly, through different levels of processing.

Informal tests of the system show that it is capable of producing a significant improvement in the prosodic quality of the resulting synthesized speech. Our investigations of the system's problems, which we describe, have not revealed any serious counterexample to our basic approach. In many cases, it appears that problems with the current version can be resolved by taking our approach a step further, and including lexical information required by the parser as another factor in the determination of prosodic phrasing.

TEXT-TO-SPEECH

Most text-to-speech systems comprise two components: pronunciation rules and a speech synthesizer. Pronunciation rules convert the input text into a phonetic transcription; this information may also be supplemented by a dictionary that provides information about the part of speech, stress pattern, and phonetic makeup of particular words.

1. With a grammar of about 600 rules and a lexicon of about 2400 words, Fidditch parses the 25 sample sentences of Robinson (1982), averaging 7 words per sentence and chosen for their structural diversity, at an average rate of .405 seconds per sentence on a Symbolics 3670. The rate is approximately proportional to the number of words in a sentence.

The speech
A further goal is to convert the resulting insights about this relation into a system that can work with a speech synthesizer. This allows us to test our description more adequately and perhaps also produce something that will further text- to-speech technology. SYNTACTIC STRUCTURE AND PROSODIC PHRASING Certain relations between syntax and prosody. especially at the word level, are well-known. For example, the syntactic category of a word may affect its phonetic realization, as in the verb/adjective distinction of separate, approximate, and the verb/noun distinction of house, wind, lives. Likewise, syntactic category affects word stress, so that verbs such as progress, insert, object, and rebel receive final stress, whereas the corresponding nouns receive penultimate stress. Beyond the word level, however, there has been little investigation of systematic connections between syntactic structure and prosodic phrasing. The psycholinguistic and acoustic investigations of Cooper and Paccia-Cooper (1980), Umeda (1982) and Gee and Grosjean (1983)and the prosodic theory of Selkirk (1984) are among the more notable studies and represent the two main approaches to syntax/prosody 2. Note that without a syntactic anal,,sis that correctly identifies ~rammatical functions, it is impos'sible to determine whether tlae word mark is a noun ending the subject phrase or the verb of the predicate phrase. Simple 'surface" parsers, such as that described in Umeda and Teranishl (1974l. will still fail to identify, the prosodic boundar.~ correctly.. relations. In Cooper and Paccia-Cooper (1980) and Umeda (1982), the connection from syntax to prosodic phrasing is unmediated by any filtering process, i.e.. they propose that the details of prosodic phrasing can be determined directly from syntactic structure by associating particular syntactic nodes (or constituent boundaries) with a phonetic value, either pausing, segmental lengthening, or the blocking of the cross- word conditioning of phonological rules. By contrast, Gee and Grosjean (1983) and Selkirk (1984) believe that the syntax-prosody relation is indirect: prosodic phrasing is derived by rules that refer to left-to-right ordering, length (or branching patterns), and, in the ca~e of Selkirk. grammatical function, as well as constituent membership in order to infer a hierarchical prosodic structure. But while their respective positions are quite clear, none of these studies is conclusive. All lack a syntactic framework sufficiently detailed and formalized to allow extensive testing, and most consider 9nly a small number of sentences and sentence types?. To develop our analysis, we first examined prosodic phrasing in the speech of one of us reading prose from various texts, including four instruction manuals. These texts were later augmented by a ~ rofessional reading of a prose story. The boundaries etween prosodic phrases were identified and then classed according to their syntactic context and semantic function. Our results, which are outlined below, indicate an organization of the prosodic phrases that supports the 'indirect relationship' approach of Gee and Grosjean (1983) and Selkirk (1984). We found that, in our corpus, prosodic phrasing depends on three aspects of structure: the breakdown into syntactic constituents, the .grammatical function of a constituent, and constxtuent length, Let us review each of these factors. Syntactic Constituency. The possible constituents recognized by our parser are Noun Phrase (NP). Verb Phrase (VP). 
Adjective Phrase (AdjP), Adverb Phrase (AdvP), and Prepositional Phrase (PP). In general, we found that syntactic constituency is particularly important for predicting points at which a prosodic phrase boundary is not produced, i.e., the words within a syntactic constituent cohere. For example, the italicized phrases in (1)-(5) had no perceptible boundaries at the locations indicated by #:

(1) Left-hand # power unit is connected ...
(2) This procedure shows # you ...
(3) An extremely # narrow opening ...
(4) To spread powerload more # evenly
(5) ... next # to any powered di-group

The single exception to word cohesion within syntactic constituents involved boundaries between the verb and its first or second object when the object in question was lengthy. We discuss this exception below.

3. Gee and Grosjean (1983) use a corpus of 14 sentences. Umeda (1982) considers a large corpus but, like Gee and Grosjean, does not distinguish among grammatical functions. Although Selkirk cites many examples in her discussions of phrasal stress and word-level prosody, her description of prosodic phrasing focusses on only a single example.

Grammatical Functions. Our sample indicated that phrase boundaries are also determined by the grammatical relations among the syntactic constituents, i.e. the argument structure of the sentence. Four grammatical relations concern us:

(a) subject-predicate, as in The 48-channel module -- has two di-groups.
(b) head-complement, where the head can be a noun, verb, or adjective and may have one complement, e.g. has -- two di-groups, or two complements, e.g. shows -- you -- how to fly your kite.
(c) sentence-adjunct, as in Insert unit into correct shelf location -- per detail instructions.
(d) head-modifier, where the head can be a noun, verb, adverb, or adjective and the modifier can be one of several things, depending on the head (e.g., for nouns, the modifier can be a relative clause; for verbs, it can be a prepositional phrase; for adjectives and adverbs, the modifier can be a comparative).

We observed a hierarchy among these relations with respect to the strength, or perceptibility, of a prosodic boundary, with the boundary between sentence and adjunct receiving the highest potential boundary strength, followed by the subject-predicate boundary, then the head-complement and head-modifier boundaries. Thus in (6), there is a strong boundary between subject and predicate, whereas in (7), due to the strong boundary between adjunct and core sentence, the subject-predicate boundary diminishes. (Dashes indicate the location of the boundary being discussed.)

(6) The name of the character -- is not pronounced.
(7) When this switch is off -- the name of the character is not pronounced.

Constituent Length. While we may view each boundary as having an intrinsic strength based on constituency and grammatical function, the determination of actual strengths appears to depend on the interaction of the intrinsic strength of a boundary with the strengths of other boundaries in the sentence, as well as the distance between these boundaries. The most salient of the interactions we observed was between the placement of a boundary at the subject-predicate junction and the placement of a boundary following the verb-complement junction. The mediating factor in this interaction was the relative length of the subject with respect to the length of the verb's complements. Thus a sentence such as (8),
with both a short subject and a single short object, generally is produced without a boundary in either position.

(8) You have completed the task.

But if, as in (9), the subject is long relative to the object, then a break occurs between the subject and predicate. Conversely, if the subject is short relative to the object, then a break will occur between the verb and the object, as in (10). Or, if there are two objects and the first is simple, the break will occur between them, as in (11).

(9) The materials required -- are one kite kit.
(10) How shall we judge -- the goodness of an algorithm?
(11) This procedure shows you -- how to fly your kite.

AN EXPERIMENTAL PROSODY SYSTEM

Our findings confirmed that syntactic structure plays a major role in determining prosodic structure, but the relationship is indirect--the exact influence of syntactic constituency varies according to the length and grammatical function of each constituent. To refine and test this idea, we implemented an experimental text-to-speech system in which rules apply to a parse tree to infer prosodic structure and then annotate the input string with phrasing information derived from the prosodic structure; this annotated input string is submitted to the Bell Labs text-to-speech programs, which convert it into a speech file.

Our system comprises three components: a parser that builds syntactic structure, rules that derive prosody information from the syntactic structure, and the Bell Labs text-to-speech programs. The parser and speech programs are independent components. The prosody rules act as a filter between them, converting the syntactic information generated by the parser into prosodic information that can be supplied to the text-to-speech programs.

Parsing. Our parser is a version of Fidditch (Hindle 1983), a moderate coverage parser based on the deterministic model described in Marcus (1980). To build syntactic structure, Fidditch uses a grammar that requires the representations produced by lexical and syntactic rules to be consistent with the (semantic) predicate-argument structure. The surface syntactic structures generated by the parser represent the argument structure of a phrase or sentence, i.e. the "core" constituents of a sentence (its subject (NP), modality (AUX), and predicate (VP)) and the complements of phrasal heads. The structure is determined, for the most part, by rules that refer to argument information that is specified in the lexicon for the content words (nouns, verbs, adjectives, adverbs), and by rules that insert null terminals such as the "trace" of wh-movement. In general, the grammar is consistent with the government and binding framework of Chomsky (1981), as adapted to the needs of a parser.

The input to the parser is a phrase or sentence (punctuation is optional). Its output is a surface structure tree in which the status of a constituent with respect to the predicate-argument structure of the sentence is indicated by the constituent's attachment to higher nodes in the tree. Thus only constituents that belong to the core are attached to the S node, and only complements of a phrasal head can become righthand sisters of the head. Adjuncts and modifiers,
For example, Figure 1 shows the parse tree for Left-h'and power unit on each shelf in 48-channel module can power only the echo cancelers that are in that shelf. 4 The structure in Figure 1 contains a single core sentence -- unit can power the cancelers -- with left- branching modifiers -- left-hand, power, and echo. The sentence also contains three modifiers -- the PPs on each shelf and in 48-channel module, and the adverb only -- which are unattached constituents. This is the significance of the unlabeled node dominating each of these constituents. The PPs are not attached because unit is not lexically marked to take a PP headed by on or in as a complement, and shelf is not lexically marked to take a PP complement headed by in. Nor is any constituent lexically marked to accept onh' as an argument. Figure 1 also contains a relative clause, that are in that shelf. In the relative clause, T is a null terminal that stands for the trace of the relativized subject NP; the * in tense stands for a null Aux element. Because nouns do not select relative clauses as arguments (any noun can be relativized), the parser does not identify the relations of the modifier constituent to the elements of the core sentence. Hence the relative clause is not attached to any other syntactic node in the tree. Text-to-speech Synthesis. The programs that make up the speech component are described in Liberman and Buchsbaum (personal communication). These programs take English text as input and produce digitized speech output. By annotating the input text to this system, many aspects of its operation can be overridden or modified: e.g. the location of major and minor phrase boundaries, the stress given to words, the transcription of words and the boundaries between them, the timing of segments, and details of the pitch contour. As we will show, with our prosody system we are able to produce strings in which four boundary levels are identified and perceptually distinguished, using the current text- to-speech system annotations. Prosodic Phrasing. The prosody rules use information about constituent structure, grammatical role, and length to map a surface structure such as that in Figure 1 onto a prosody tree such as that in Figure 2. The prosody tree identifies the location of phrase boundaries (signified by the • nodes) and the relative strength of each boundary (signified by a number in the • node). It is this information that is used to annotate the input text with escape sequences that provide the text-to- speech system with instructions about prosodic phrasing. In formulating our rules for building the prosodic structure, we began with the idea of simply implementing the model of Gee and Grosjean (1983). This model, initially proposed to predict a form of psychological data describing subjective sentence structure known as performance structure, determines prosodic boundaries from a syntactic tree, but assumes rather than explicitly presents a syntactic component. We were initially attracted to the Gee and Grosjean model because of its emphasis on relative boundary weighting, i.e., on the determination of the strength of a given boundary with respect to the other boundaries in the sentence. We found that in the data we had collected, this weighting played an important role. 
In fact, we incorporated directly into our system one method of doing this weighting, namely Gee and Grosjean's rule to determine the strengths of the prosodic phrase boundaries around a verb using relative length (as measured by terminal node count).

As we extended Gee and Grosjean's model to create an algorithm adequate for use in a general purpose system, our algorithm diverged from its starting point, reflecting our attempts to correct weaknesses and lacunae that we encountered in the Gee and Grosjean model. That we encountered these problems is not surprising given the difference between our goals and those of Gee and Grosjean. The most important difference between the Gee and Grosjean model and our current algorithm involves the factors determining boundary weight. Gee and Grosjean assume that this weighting is dependent only on the number of syntactic nodes, their left-to-right ordering and, in the case of the verb phrase, on constituent length. In contrast, our data, in agreement with Selkirk's (1984) theoretical analysis, indicated that boundary strength is dependent on the grammatical functions that the constituents in a given sentence play. In particular, we observed a hierarchy among these functions with respect to boundary strength, as discussed below. 5

5. As an example of the effect that grammatical functions have on prosodic phrasing, consider the sentence Finally the strange young man left. We view this sentence as consisting of two grammatical relations: subject-predicate and adjunct-sentence. In our hierarchy of grammatical relations, the boundary between the adjunct and the sentence is more salient than the boundary between the subject and the predicate. The system reflects this by assigning a stronger boundary following Finally than following man. If we exclude any effects of grammatical functions and assume a simple left-to-right attachment of the three constituents Finally, the strange young man and left to the prosody tree, we would assign a stronger boundary following man than following Finally. It is not clear that Gee and Grosjean make this left-to-right assumption in such examples. They view adverbial phrases like Finally as dominated by the complementizer node in the syntax tree, and it is difficult to determine whether they integrate the material in the complementizer with the material in the core sentence as they are analyzing the material in the core sentence or after that analysis is completed. If they integrate the complementizer with the core sentence, then they assume that Finally bundles with the sentence in a left-to-right manner and predict, incorrectly, that the stronger boundary occurs after man. If they complete the prosodic analysis of the core sentence before bundling the sentence with the complementizer, then they incorrectly predict that there is a strong boundary after wh-phrases in the complementizer. In particular, they would incorrectly predict that in sentences like At the outset what problems did you expect the most perceptible boundary would be after problems. Furthermore, assuming that an adjunct in sentence-initial position is dominated by the complementizer node and in sentence-final position by S-bar creates an inconsistent description, which hampers the value of the model as an experimental tool.

In addition to incorporating grammatical function information into our system, we fleshed out the model of Gee and Grosjean to deal with syntactic structures that they do not explicitly consider. In particular, Gee and Grosjean's strictly left-to-right building of the
prosodic tree left certain questions open. For example, their model does not deal with sentences embedded in the middle of a main sentence (as in The notion [that he would refrain from such an act] was incorrect). We incorporate embedded sentences into the prosodic tree in a cyclic manner to insure that the material in the embedded sentence is processed before that in the main sentence. 6 In addition, Gee and Grosjean leave open the treatment of the multiple rightward embedding of non-sentential constituents, e.g., the NP embedding in The destruction of the good name of his father. Our approach is to handle these cases recursively, from the most deeply embedded phrase up, in order to preserve the prosodic cohesion of the entire NP.

6. Having taken this strong approach, we now understand the limited exceptions to this mechanism, which we discuss below.

Our adjunction rules are derived for the most part from Selkirk's account. We have also made use of the idea, which Gee and Grosjean (1983) take largely from the work of Selkirk, that certain syntactic heads mark off phonological phrase boundaries, and provide the basic prosodic constituents for higher level analysis.

Our prosody rules run in four independent stages. Each stage builds on the previous stage, so that the rules can refer to both syntactic and prosodic structure as they build successively higher levels of prosodic structure.

(i) Adjunction Rules combine orthographically distinct words into phonological constituents with no internal word boundary. They join a word to its left or right neighbor depending on (a) the category of the word, and (b) its structural relation to other words. In general, adjoinable words are the function words -- articles, complementizers, auxiliary verbs, conjunctions, prepositions and pronouns (except for the "strong" possessives, mine, hers, theirs, yours, ours, which are treated as regular NP's). Adjunction occurs six times for the sentence in Figure 2 to create six multiple word groups, all right-adjoining: on each, in 48-channel, can power, the echo, that are and in that. These groups of adjoined words appear as terminals in the prosody tree in Figure 2. In subsequent processing the boundaries between the words in these groups are marked so that the text-to-speech system does not produce the prosodic indications of a word boundary. In addition, these groups are treated as single words in further analyses.
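As a minimal sketch of stage (i) -- ours, and deliberately cruder than the actual rules, which also consult the word's structural relations and allow left-adjunction -- each function word can simply be right-adjoined to its neighbor:

```python
# Simplified adjunction: every function word right-adjoins to the next
# word; the real rules also use structural information.

FUNCTION_CATS = {'article', 'complementizer', 'aux',
                 'conjunction', 'preposition', 'pronoun'}

def adjoin(tagged):
    """tagged: list of (word, category) pairs.  Returns word groups to
    be treated as single words, with no internal word boundary."""
    groups, i = [], 0
    while i < len(tagged):
        word, cat = tagged[i]
        if cat in FUNCTION_CATS and i + 1 < len(tagged):
            groups.append([word, tagged[i + 1][0]])   # right-adjoin
            i += 2
        else:
            groups.append([word])
            i += 1
    return groups

# adjoin([('on', 'preposition'), ('each', 'article'), ('shelf', 'noun')])
# -> [['on', 'each'], ['shelf']]   -- cf. the group 'on each' above
```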
phrases; these @ phrases have an internal structure, but the structure plays no role in further processing. Note that neither adjectives nor adverbs are allowed to be the head of a • phrase, so that three additional open slots is a single • phrase consisting of four words. Examples such as Someone tall walked into the room, however, suggest that our treatment of these categories is not detailed enough and that, in future versions of the system, some adjectives and adverbs should act as • heads. (iii) Prosody-phrasing rules use information about phrases and syntactic structure to create a new organization of the sentence and to assign strength values to the boundaries between successive • phrases. The process of building the prosody tree starts with the sentence node (S or Sbar) that is most deeply embedded in the utterance, transforming it into a prosody subtree. This process continues through successively higher levels of sentence nodes until all top-level sentences have been transformed into prosody subtrees. All the processing of each successive sentence is done before the relation of the sentences to each other is considered7 Within a sentence, the • phrases are processed from left to right. This stage of the analysis uses a window that allows access to three adjacent nodes. Pattern-action rules, which are described below, apply to the nodes in the window and build prosody subtrees that replace the syntax nodes. These subtrees are headed by a • node containing a number that represents node count; the number is determined by counting the number of nodes contained in the prosodyasubtree, plus 1 for the • node that heads the subtree. In general, the prosody phrase rules do three things: (a) Balance prosodic phrases by referring to constituent length. This rule only applies for building the prosody subtree that contains the verb. If the node count for subject plus verb is less than the node count of the verb's complement, then subject and verb are grouped together in a prosodic subtree; this gives the phrasing in The characters on the right -- mark the salient features. Otherwise, the verb is grouped with its complement in a prosodic subtree; an example of this grouping is the subtree for can power only the echo cancelers in Figure 2, (b) Combine the • phrase daughters of the major constituents, excluding VP, into a prosodic subtree. At present, this rule only applies to NP and PP since adjectives and adverbs are currently not treated as @ heads. For example, the name of the character, which forms two d~ phrases under NP (the name and of the character), become a single prosody phrase that replaces the NP. 7, We have found at least one class of phrases for which this order of processing appears inappropriate. In these, the head of the top-level phrase is epistemlc -- e.g., believe, know, belief, knowledge -- andits complement is a sentence. In most cases, the current processing order for embedded sentences will produce a break between a head and a following embedded sentence. For this class of sentences, however, thd break does not seem to be appropriate. "~Vhile it wot ld be straightforward to handle this as an exception, we are currently examning whether there is a more principled wa? to describe what must be done in these cases. s Onl,~ the top-level • nodes, those which contain the head of the ~ ntactic phrase, are counted in computing the node count. LnU~,~'- ~y~:Lv~ .... 
~am~lev • in Fi,,ure -, "~ the sub-phrasal branching' ot" Left-hand and power unit c~oes not contribute to the node count. 149 (c) Bundle together prosodic constituents (~ phrases) from left to right if no other rules apply. This rule integrates the constituents left unattached by the parser into the prosodic structure. It accounts for the prosodic structure of left-hand power unit on each shelf in 48-channel module in figure 2, which is formed by first bundling left-hand power unit with on each shelf, into q~-3, and then bundling the result with in 48-channel module into ~-5. The final application of bundling replaces the Sigma node with the top level prosody node, which is q5-13 in Figure 2. (iv) Prosody conversion rules map the boundary strength indices onto three phonological mechanisms. Boundary indices in the low range, e.g. the ~-3 nodes in Figure 2, are realized as a phrase accent (Pierrehumbert 1980). Mid-range indices such as ~-5 and ~-9 in Figure 2 are realized as changes in pitch range. High indices are realized with modulations in both pitch range and duration. Thus the hierarchical organization of a structure such as that in Figure 2 can be reflected directly in the synthesized speech. PHENOMENA NOT TREATED Several phenomena have been omitted from this preliminary version of the system. Some of these omissions arise from the fact that we concentrated on sentence analysis rather than discourse analysis. Others involve phenomena that characterize spoken English, and thus did not occur in our original corpus of technical repair manuals. Contrastive stress is an example of prosodic phrasing based on discourse analysis. In our system's analysis, the phrase from India does not receive contrastive stress in (12). (12) Passengers from several countries entered the terminal. Finally a man from India walked in. In designing the current system, we have concentrated on the level of sentence analysis. Handling the contrasts involved in data like (12) necessitates an additional level of discourse analysis. In addition, the system never explicitly manipulates segment durations or overall speech rate. For example, we have vet to explore whether lengthening of the segment before a mid-range boundary value is appropriate, or whether increasing the duration of constituents of the core sentence might enhance the natural sound of the system. RESULTS AND FUTURE RESEARCH To date. our system has been tested systematically on a set of 39 sentences, and its performance has been observed less formally on a set of approximately 300 sentences. 9 The test corpus covers a repair manual for telephone switching systems and an introductory description of the Prose 2000 text-to-speech system. We added sentences cited in Umeda (1982) and sentences that we composed in order to extend the range of syntactic constructions represented in the test. In general, we have observed a significant improvement of prosodic quality in those test 9 The 39 sentences are listed in the appendix to this paper. sentences where the parser and the prosodic component have returned acceptable results. We have observed problems, however, especially in the formal test corpus, much of which we chose for its potential difficulty. Of the 39 test sentences, 38 parsed correctly. Of these, the prosodic component returned 26 sentences with a complete set of acceptable prosody markings. In terms of actual markings, the system marked 393 prosodic events, of which 21 markings were unacceptable. 
We can attribute errors in those sentences with unacceptable prosodic markings to three distinct problems discussed below.

Complement Sentences. Five of the errors that arose from the prosody system's treatment of the test corpus result from the fact that the system sets off all subordinate sentences, including complement sentences, from the main sentence. Informal testing of the productions of four informants on the relevant data indicated that this approach works correctly for complement sentences such as (13)-(16). (Complement sentences are italicized):

(13) Health services cautioned Western residents -- that they should ask where their watermelons come from before buying.
(14) We have to satisfy people -- that the crisis is past.
(15) The vendors explained -- that this is the result of illness among 281 people who ate pesticide-tainted watermelons.
(16) Watermelon growers wonder -- whether this will continue throughout the rest of the season.

However, the informant test consistently indicated that the complement sentences in (17)-(19) are not set off by a comparable boundary:

(17) They believe California sales are still off 75 percent.
(18) They think the Southeast is shipping half its normal load.
(19) Growers and retailers claimed the incident hurt sales across the USA.

Cases like (17)-(19), in which no break is perceived between the verb and its complement sentence, form a syntactically distinct class in Fidditch. This class is characterized by the fact that the verbal head in each case is one that does not require that its complement sentence begin with a complementizer (either that, for, or a wh-word). The class includes epistemic verbs, like those in (17)-(19), as well as a wide range of verbs that take either tensed sentences or various types of non-tensed sentences as complements.10 The examples (20)-(26) demonstrate the range of this class (complement sentences are italicized):

10. Fidditch, in following the outlines of Chomsky's (1981) Government and Binding theory, assumes that propositions, i.e., those elements that contain both a predicate and a perhaps null subject, are syntactically represented as sentences, regardless of tensing.

(20) We had the ship's forces make temporary repairs.
(21) We saw the crew repairing the unit.
(22) He wants the units repaired by the ship's force.
(23) The construction of the unit makes detailed investigation impractical.
(24) Try to give the names of the characters in advance.
(25) They will help finish the job.
(26) The new equipment will facilitate making repairs.

Sentence-Final Constituents. Fifteen of the errors that arose from the system's treatment of the test corpus result from a high boundary value that sets final constituents off from the main sentence. The high value is due to the system's purely left-to-right attachment of syntactically unattached constituents (see rule iii.c above). The high boundary value is acceptable in sentences like (27)-(29). (The relevant final constituents in these examples are italicized).

(27) In these instances it may be desirable to use phoneme characters instead of text characters to represent a word -- each time it appears in the input text.
(28) Phonemic characters can also be used to handle syntactic data such as boundaries -- which can improve speech quality.
(29) We were unable to finish the work -- due to equipment failure.

However, the high boundary value sets the final constituent off unnaturally from the main sentence in data such as (30)-(32).
(30) The method by which you convert a word into phonemes is provided -- in Chapter 7.
(31) The experimenters instructed the informant to speak -- naturally.
(32) We discussed the techniques -- we had implemented.

In many cases it appears that the grammatical relation of the final constituent to the rest of the sentence determines the boundary value that sets off this constituent. In particular, sentence adjuncts, which bear no relation to any single item in a sentence, are set off by a minor phrase boundary, whereas final constituents that modify a particular item are less perceptibly set off. This is the distinction between the final constituents in (27)-(29), which are adjuncts, and those in (30)-(32), which are modifiers.

However, while the distinction between the grammatical relations of the core sentence (complement and subject) and those of the periphery (adjunct and modifier) is fairly straightforward, and handled directly by the mechanisms of the Fidditch parser, the distinctions between the peripheral elements of adjunct and modifier are complex and require the addition of costly mechanisms. The cost of adding adjunct/modifier distinctions is illustrated by the ambiguity that arises when both adjunct and modifier readings are possible. For example, on one reading of (31), naturally modifies the verb speak; i.e., the informants were to speak in a natural manner. On the other reading, naturally is an adjunct equivalent to of course. (To see this meaning more clearly, consider the rearrangement of this sentence with the adjunct at the beginning: Naturally, they instructed the informants to speak.) The context of speech analysis prefers the former reading. However, the net benefit of adding sophisticated contextual analysis to our system, if attainable, is, at best, unclear. The same may be said of adding selectional restrictions, or detailed information on logical form.

In contrast, a finer treatment of local syntactic constraints on boundary values preceding final constituents is within reach. From the data we have examined, it appears that the character of the prosodic event before the final constituent can be locally determined to a great extent. For the most part, this determination depends on the category type of the final constituent and on the contents of the leading edge of the constituent. For example, interjections (however, moreover, therefore, alas, thus, of course, etc.) and sentence adverbs (apparently, generally, luckily, etc.) are uniformly set off by a high boundary value and should remain so. In contrast, the boundary value of final prepositional phrases, particularly those with a monosyllabic preposition (in, on, at, to, with, for) as the left edge of the phrase, should be reduced.11 We are currently engaged in categorizing the constituent types and left-edge items that characterize final constituents with respect to the prosodic event that precedes them. Alternatively, we are considering the play-it-safe approach of reducing the high boundary values that set off final constituents to mid-boundary values. Currently these values are converted to a downstepping feature. This approach may also be useful in conjunction with our local determination approach for those constituents whose status is either undecidable or ambiguous under the latter approach.12

11. In this view, expressions such as in principle, in general, in particular, in consideration of, etc. must be treated like interjections.

12. Reducing the final boundary value leaves ambiguities unresolved.
For sentences such as (i) and (ii), below, we believe this lack of resolution is appropriate:

(i) John saw a girl in the park with a telescope. [The telescope is with John or the girl, or it's in the park.]
(ii) I need a woman to fix the sink. [I need a woman so that I can fix the sink. / I need a woman who can fix the sink.]

Our view, following Marcus and Hindle (p.c.), is that in normal, spoken English, such ambiguities are not processed unless the speaker or listener is directly questioned regarding the ambiguity. Likewise, the prosodic events that might disambiguate are inappropriate unless such questioning occurs. Other cases are less clear. For example, it is difficult to imagine that, in (28), the difference between the readings of the which clause as a sentence adjunct and as a noun-phrase modifier is not processed. We would hope that in such cases some local distinction, such as the presence or absence of the comma in (28), obtains.

Sentence-Initial Constituents. When a sentence contains both sentence-initial and sentence-final adjuncts, the sentence-initial adjuncts will be less prominently set off than the sentence-final adjuncts due to the left-to-right attachment of adjuncts to the prosodic tree (see rule iii.c above). In data like (33), however, a more appropriate rendering would have the boundary after the adjunct On a clear day be strong relative to the boundary before the adjunct as it rises over the mountains.

(33) On a clear day you can see the sun as it rises over the mountains.

While it would be trivial to increase the value of the pertinent boundary, we are as yet unsure what the critical features are which require a more perceptible boundary. For example, while a higher boundary value after the prepositional phrase in (34) might be acceptable, it is not clear that it is necessary:

(34) In the morning John left.

Given the stylistically distinct nature of this data, we have not yet considered this question in detail.

Summary. While we have systematically tested our system so far on a small set of examples, the number of prosodic events involved in those examples, 393, is high, due to the length of the sentences tested. We find the 5 percent error rate, representing 21 prosodic events, encouraging at this stage in the development of the system. In addition, we have delimited the problem areas of an approach that relies solely on information available in the syntax tree. Our initial investigation of these problems indicates that at least part of the necessary information about phrase-level prosody is conveyed in the lexicon per se. Additionally, due to the left-corner orientation of the Fidditch parser, which exists independently to optimize search strategies, the necessary lexical information is made easily available.

CONCLUSIONS

We have described an on-line experimental system that uses prosody rules to infer prosodic phrasing from constituent structure, grammatical functions, and length considerations. The system contains three modules: a deterministic parser, a set of prosodic phrasing rules, and an algorithm to convert the output of the prosodic phrasing rules into signals for the Bell Labs text-to-speech system. In developing the experiment, our intention was to build a working system that would allow us to test various hypotheses about the connections between syntax and prosodic phrasing in human speech and to upgrade the prosody of existing synthetic speech.
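As a rough picture of that three-module flow, the following Python sketch may help. The function bodies are placeholders standing in for the real modules (the Fidditch parser, the phrasing rules, and the converter), and the numeric thresholds are guesses based on the φ-3 / φ-5 / φ-9 examples in the text, not values taken from the system:

```python
# Illustrative three-stage pipeline; not the actual implementation.

def parse(sentence):
    """Stage 1: deterministic parse -> syntax tree (placeholder)."""
    return {'cat': 'S', 'words': sentence.split()}

def assign_phrasing(tree):
    """Stage 2: phrasing rules -> boundary-strength indices (placeholder)."""
    return [3, 5, 13]

def convert(boundaries):
    """Stage 3: map low/mid/high indices onto synthesizer controls.
    Threshold values are illustrative assumptions."""
    controls = []
    for b in boundaries:
        if b <= 3:
            controls.append('phrase accent')
        elif b <= 9:
            controls.append('pitch-range change')
        else:
            controls.append('pitch range + duration')
    return controls

print(convert(assign_phrasing(parse("The name of the character is not pronounced"))))
# -> ['phrase accent', 'pitch-range change', 'pitch range + duration']
```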
The modularity of our system enables us to alter each module independently in order to test different hypotheses. For example, the parser can be altered to reflect the difference between verbs that require a complementizer before a sentential complement and those that do not.13 This alteration is independent of the workings of the prosody system or the prosody conversion rules.

13. Fidditch represents this as a difference in the level of the complement sentence. Verbs that require a complementizer take an S-bar complement, while verbs that do not require a complementizer take an S complement with an optional that preceding.

The existence of this prosody system makes the problem areas in the syntax-prosody relation more tractable by allowing online testing of a large body of data. For example, the prosodically different character of the two classes of complement sentences discussed above became apparent after several examples from each class were run through the system. We therefore feel we have built a tool that will aid in designing better approximations of sentence prosody as it relates to syntactic structure.

REFERENCES

Allen, J. 1976. Synthesis of speech from unrestricted text. Proceedings of the IEEE, 4, 433-442.
Chomsky, N. 1981. Lectures on government and binding. Dordrecht: Foris Publications.
Cooper, W. and J. Paccia-Cooper. 1980. Syntax and speech. Cambridge, MA: Harvard University Press.
Elovitz, H., R. Johnson, A. McHugh, and J. E. Shore. 1976. Letter-to-sound rules for automatic translation of English text to phonetics. IEEE Transactions on Acoustics, Speech, and Signal Processing, 6, 446-459.
Gee, J. P. and F. Grosjean. 1983. Performance structures: a psycholinguistic and linguistic appraisal. Cognitive Psychology, 15, 411-458.
Hindle, D. 1983. User manual for Fidditch, a deterministic parser. NRL Technical Memorandum #7590-142.
Luce, P. A., Feustel, T. C., and Pisoni, D. B. 1983. Capacity demands in short-term memory for synthetic and natural speech. Human Factors, 25, 17-32.
Marcus, M. 1980. A theory of syntactic recognition for natural language. Cambridge, MA: MIT Press.
Pierrehumbert, J. B. 1980. The phonetics and phonology of English intonation. Ph.D. Dissertation, MIT.
Selkirk, E. O. 1984. Phonology and syntax: the relation between sound and structure. Cambridge, MA: MIT Press.
Umeda, N. 1982. Boundary: perceptual and acoustic properties and syntactic and statistical determinants. Speech and Language, 7, 333-371.
Umeda, N. and R. Teranishi. 1975. The parsing program for automatic text-to-speech synthesis developed at the Electrotechnical Laboratory in 1968. IEEE Transactions on Acoustics, Speech, and Signal Processing, 23, 183-188.

APPENDIX: TEST SENTENCES

1. THE NAME OF THE CHARACTER IS NOT PRONOUNCED.
2. LEFT-HAND POWER UNIT ON EACH SHELF IN FORTY-EIGHT CHANNEL MODULE POWERS ONLY ECHO CANCELLERS IN THAT SHELF.
3. THE CONNECTION MUST BE DETERMINED FOR THE LEFT-HAND POWER UNITS ON EACH SHELF.
4. THE CONNECTION MUST BE DETERMINED FOR THE LEFT-HAND POWER UNITS WHICH ARE ON EACH SHELF.
5. THE METHOD BY WHICH ONE CONVERTS A WORD INTO PHONEMES IS PROVIDED IN CHAPTER 7.14
6. WE DISCUSSED THE TECHNIQUES WE HAD IMPLEMENTED.
7. THE TECHNIQUES WE HAD IMPLEMENTED WERE TESTED ON A LARGER MACHINE.
8. THE MAN WHOM WE SAW YESTERDAY LIVES FAR AWAY FROM HERE.
9. THEY TOLD HIM TO WALK SLOWLY.
10. THE DESTRUCTION OF THE GOOD NAME OF HIS FATHER BOTHERED HIM.
11. LATELY HE HAD HAS CONTROL OVER THE SITUATION.
12. I NEED A WOMAN TO FIX THE SINK.
13. JOHN MET A WOMAN HE THOUGHT HE LIKED.
14. THE WOMAN I SAW CAME FROM HERE.
15. IN THESE INSTANCES IT MAY BE DESIRABLE TO USE PHONEME CHARACTERS INSTEAD OF TEXT CHARACTERS TO REPRESENT A WORD EACH TIME IT APPEARS IN THE INPUT TEXT.
16. PHONEME CHARACTERS GIVE MORE CONTROL OVER THE PARTICULAR SOUNDS THAT ARE GENERATED.
17. THE MATERIALS REQUIRED ARE ONE KITE KIT.
18. PHONEMIC CHARACTERS CAN ALSO BE USED TO HANDLE SYNTACTIC DATA SUCH AS THE BOUNDARIES WHICH CAN IMPROVE SPEECH QUALITY.
19. IT MAY BE DESIRABLE TO GIVE JOHN A HAND.
20. AFTER THESE QUESTIONS, A DETAILED DESCRIPTION OF THE USE OF PHONEMES WILL BE PROVIDED IN CHAPTER 7.
21. THE ENGLISH THAT IS SPOKEN IN AMERICA AT THE PRESENT DAY HAS RETAINED A GOOD MANY CHARACTERISTICS OF EARLIER BRITISH ENGLISH THAT DO NOT SURVIVE IN BRITISH ENGLISH TODAY.
22. PHONEMIC CHARACTERS CAN ALSO BE USED TO HANDLE SYNTACTIC DATA SUCH AS THE LOCATION OF THE ENDS OF PHRASES WHICH CAN IMPROVE SPEECH QUALITY.
23. THE STUDENTS CONSIDERED THE ASSUMPTION THAT A BREAK MIGHT OCCUR.
24. FINALLY YOU MUST ASSUME THAT YOUR CIGARETTES WILL BOTHER THE PASSENGERS.
25. TRY TO GIVE THE NAMES OF THE CHARACTERS TO JOHN.
26. I PREFER FOR HIM TO GIVE THE NAMES OF THE CHARACTERS TO JOHN.
27. I BELIEVE THOSE PEOPLE TO BE INTELLIGENT.
28. I PROMISED HIM THAT HE COULD COME.
29. THEY GAVE THE BOY A BOOK.
30. THEY GAVE HIM A BOOK.
31. THE 48-CHANNEL MODULE CAN HAVE ONLY TWO DI-GROUPS BUT CAN HAVE UP TO FOUR POWER UNITS IF BOTH DI-GROUPS ARE EQUIPPED WITH ECHO CANCELERS.
32. I TOLD HIM YESTERDAY TO CLEAN HIS ROOM.
33. MOVE THE POWER OPTION JUMPER PLUG SO THAT IT IS ADJACENT TO DI-GROUP ONE ON PRINTED WIRING BOARD.
34. I WANT A LOT MORE COOKIES.
35. THE MINUS-SIGN PRONUNCIATION SWITCH IS IN THE MIDDLE.
36. HE ASKED THE CHILDREN TO FINISH THE JOB.
37. HE ARGUED THAT IT WAS IMPOSSIBLE.
38. IS A MAN AT THE DOOR.
39. A DETAILED DESCRIPTION OF THE USE OF PHONEMES IS PROVIDED IN CHAPTER 7.

14. Fidditch failed here on the relative clause with a PP left edge.
1986
22
Morphological Decomposition and Stress Assignment for Speech Synthesis

Kenneth Church
Bell Laboratories
600 Mountain Ave.
Murray Hill, N.J.
research!alice!kwc
[email protected]

1. Background

A speech synthesizer is a machine that inputs a stream of text and outputs a speech signal. This paper will discuss a small piece of how words are converted to phonemes.

Text -> Intonation Phrases -> WORDS -> PHONEMES -> LPC Dyads + Prosodics -> Speech

Typically words are converted to phonemes in one of two ways: either by looking the words up in a dictionary (with possibly some limited morphological analysis), or by sounding the words out from their spelling using basic principles.

• Dictionary Lookup
• Letter to Sound

Both approaches have their advantages and disadvantages; dictionary lookup fails for unknown words (e.g., proper nouns) and letter to sound rules fail for irregular words, which are all too common in English. Most speech synthesizers adopt a hybrid strategy, using the dictionary when possible and turning to letter to sound rules for the rest. I discussed letter to sound rules at the last meeting of the ACL [Church]; this paper will report on some new dictionary lookup approaches, with an emphasis on morphology.

Morphological decomposition is used to reduce the size of the dictionary and to increase coverage. Instead of storing all possible words, the system can store just a lexicon of morphemes and save a factor of 10 [Jon Allen (personal communication)] in storage. Now when the system is given a word and asked to determine its pronunciation, the system decomposes the word into known morphemes, looks up the pronunciation of each of the pieces and combines the results.

2. MITalk Decomp

The best known morphological decomposition system is the Decomp module in the MITalk synthesizer [Allen et al.]. This system attempted to parse an input word such as formally into morphemes: form, -al and -ly. It was assumed that morphemes are concatenated together (like "beads on a string") according to the finite state grammar shown below. The types of morphemes were:

1. Prefixes (pref): UNtie, PERmit, REduce
2. Suffixes
   a. Derivational (derv): laxiTY, existENCE, softNESS, kingDOM
   b. Inflectional (infl): boatING, toastED, coatS, roanS
3. Roots
   a. Free (root): stay, squeeze, large
   b. Absolute (absl): the, than, but
   c. Left-Bound (lbrt): rePEL, conCEIVE
   d. Right-Bound (rbrt): CRIMINal, TOLERance
   e. Strong (root): women, rang

Costs were placed on the arcs to alleviate overgeneration. Note that the grammar produces quite a number of spurious analyses. For example, not only would formally be analyzed as form-al-ly but it would also be analyzed as form-ally and for-mal-ly. The cost mechanism blocks these spurious analyses by assigning compounding a higher cost than suffixation and therefore favoring the desired analysis. Although the cost mechanism handles a large number of cases, it would be better to aim toward a tighter grammar of morphology which did not overgenerate so badly.
State           Arc (category -> next state)    Cost
word-final:     infl -> word-final                64
                derv -> right-side-a              35
                root -> left-side-a              101
                lbrt -> middle                  1091
                absl -> word-initial            1221
right-side-a:   derv -> right-side-a              35
                infl -> word-final                35
                rbrt -> left-side-a               66
                root -> left-side-a              101
                lbrt -> middle                  1091
right-side-b:   derv -> right-side-a             963
                lbrt -> middle                  2019
                infl -> word-final               992
                root -> left-side-a             1029
                rbrt -> left-side-a               66
middle:         pref -> left-side-a               34
                root -> left-side-a              133
                derv -> right-side-b              67
                hyph -> word-final              1024
                infl -> word-final              1056
                lbrt -> middle                  1155
left-side-a:    pref -> left-side-b               34
                hyph -> word-final              1024
word-initial:   pref -> left-side-b               34
                derv -> right-side-a            1027
                lbrt -> middle                  2083
                root -> left-side-a             1093
left-side-b:    hyph -> word-final              1024
                infl -> word-final              1056

The MITalk Decomp program performed its task quite well; it could analyze 95% of running text [Allen (personal communication)]. In order to achieve this level of performance, the authors of Decomp made a conscious decision not to deal with stress alternations (festive / festivity), vowel shift and tensing (divine / divinity), and other phonological rules associated with latinate morphology. Basically, there was only one rule for combining the pronunciations of morphological pieces: simple concatenation, with a few simple rules to account for spelling alternations at the juncture:

• Silent e deletes before a vocalic suffix: observe + ance -> observance
• Consonant doubles before a vocalic suffix: red + est -> reddest
• y -> i before a suffix: glory + ous -> glorious
• y deletes before a suffix starting with i: harmony + ize -> harmonize

All affixes were assumed to be stress neutral. Words like festivity and divinity, which require a richer understanding of the interaction of morphology and phonology, were entered into the lexicon as exceptions. The decision not to handle more complicated morphological and phonological rules was based on the belief that it is hard to do an adequate job and that it wasn't necessary to do so because the rules are not very productive and hence it is possible (and practical) to list all of the derived forms in the lexicon. I'd like to believe that morphology and phonology have progressed enough over the past ten years that this argument does not have as much force as it did. Nevertheless, I have to admit that the payoff may be marginal, especially if measured in short term savings in the size of the lexicon and memory costs. The real value in the enterprise is more long term; I am betting that pushing the theoretical linguistic understanding with a demanding application such as speech synthesis will uncover some new insights.

3. Types of Morphological Combination

It has long been recognized that "stress-shifting" morphology (e.g., divin+ity) differs in quite a number of respects from "stress neutral" morphology (e.g., divine#ness). It is a well-established convention to mark the "stress-shifting" morpheme boundary with a "+" symbol and to mark the "stress-neutral" boundary with a "#" symbol. (Scare quotes are placed around "stress-shifting" and "stress-neutral" because these terms are probably not quite right.) This paper will also use the terms Level 1 and Level 2 to refer to the two types of morphological combination, respectively.
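For concreteness, the kind of arc-and-cost search that a Decomp-style analyzer performs can be sketched as a cheapest-path search over the grammar. The fragment below is an illustration only -- the lexicon is a toy, only a few arcs are included, and left-side-a is treated as the accepting state for this fragment; it is not the actual MITalk code:

```python
# Toy sketch of a Decomp-style cheapest-path analysis, scanning the word
# right to left and consuming known morphemes.

LEXICON = {'form': 'root', 'al': 'derv', 'ly': 'derv', 'ing': 'infl'}

# ARCS[state] = list of (morpheme category, next state, cost)
ARCS = {
    'word-final':   [('infl', 'word-final', 64),
                     ('derv', 'right-side-a', 35),
                     ('root', 'left-side-a', 101)],
    'right-side-a': [('derv', 'right-side-a', 35),
                     ('root', 'left-side-a', 101)],
    'left-side-a':  [],     # accepting state in this toy fragment
}

def decompose(word, state='word-final'):
    """Return (cost, morphemes) for the cheapest analysis, or None."""
    if word == '' and state == 'left-side-a':
        return (0, [])
    best = None
    for cat, nxt, cost in ARCS.get(state, []):
        for i in range(len(word)):
            suffix = word[i:]
            if LEXICON.get(suffix) == cat:
                rest = decompose(word[:i], nxt)
                if rest is not None:
                    cand = (cost + rest[0], rest[1] + [suffix])
                    if best is None or cand < best:
                        best = cand
    return best

print(decompose('formally'))   # -> (171, ['form', 'al', 'ly'])
```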
This terminology is taken from the literature on Level Ordered Morphology and Phonology (e.g., [Mohanan]), which argues that "+" boundary (level 1) morphology is ordered before "#" boundary (level 2) morphology and that this ordering dependency has important theoretical implications.

It is worthwhile to review some of the well-known differences between "+" boundaries and "#" boundaries. Informally, "+" morphemes such as in+, ad+, ab+, +al, +ity are (generally) derived from Latin whereas "#" morphemes such as #ness, #ly come from Greek and German. This historical trend is only a rough correlation and has numerous counter-examples (e.g., the German suffix -ist behaves like "+"). The program uses the following set of prefixes and suffixes:

• Level 1 "+" Prefixes: a, ab, ac, ad, af, ag, al, am, an, ap, ar, as, at, bi, col, com, con, cor, de, dif, dis, e, ec, ef, eg, el, em, en, er, es, ex, im, in, ir, is, ob, oc, of, per, pre, pro, re, suf, sup, sur, sus, trans

• Level 1 "+" Suffixes: ability, able, aceous, acious, acity, acy, age, al, ality, ament, an, ance, ancy, ant, ar, arity, ary, ate, ation, ational, ative, ator, atorial, atory, ature, bile, bility, ble, bly, e, ea, ean, ear, edge, ee, ence, ency, ent, ential, eous, ia, iac, ial, ian, iance, iant, iary, iate, iative, ibility, ible, ic, ical, ican, icate, ication, icative, icatory, ician, icity, icize, ide, ident, ience, iency, ient, ificate, ification, ificative, ify, ion, ional, ionary, ious, isation, ish, ist, istic, itarian, ite, ity, ium, ival, ive, ivity, ization, ize, le, ment, mental, mentary, on, or, ory, osity, ous, ular, ularity, ure, ute, utive, y

• Level 2 "#" Prefixes: anti, co, de, for, mal, non, pre, sub, supra, tri, ultra, un

• Level 2 "#" Suffixes: able, bee, berry, blast, bodies, body, copy, culture, fish, ful, fulling, head, herd, hood, ism, ist, ite, land, less, line, ly, man, ment, mental, mentarian, most, ness, phile, phyte, ship, shire, some, tree, type, ward, way, wise

There is also a well-known precedence relation between + and #. With very few exceptions, # morphemes nest outside of + morphemes. Thus, we have non#[in+moral] but not *in+[non#moral]. The precedence relation yields some subtle (but correct) predictions. Observe that -able can be a level 1 affix in some cases (e.g., cómparable) and a level 2 affix in others (e.g., emplóyable). Notice the contrast between INcomparable and UNemployable; the + marked comparable takes the + marked prefix in+ whereas, in contrast, the # marked employable takes the # marked prefix un#. This same contrast is brought out by the famous pair: indivisible / undividable. (This argument is no longer considered to be as convincing as it once was because of so-called bracketing paradoxes, which will be discussed shortly.)

Word formation rules are also sensitive to the difference between + and #. Note that + morphemes can attach to bound morphemes (e.g., crimin+al), but # morphemes cannot (e.g., *crimin#ness, *crimin#ly, *crimin#hood). In addition, # morphemes attach more productively than + morphemes. "It is clear that #ness attaches more productively to bases of the form Xous than does +ity: fabulousness is much "better" than fabulosity, and similarly for other pairs (dubiousness / dubiety, dubiosity). There are even cases where the +ity derivative is not merely worse, but impossible: acrimonious / *acrimoniosity, euphonious / *euphoniosity, famous / *famosity. There is also the simple list test, which is still a good indicator. Walker (1936) lists fewer +ity derivatives than #ness derivatives of words of the form Xous." [Aronoff, pp. 37-38]
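The nesting half of the precedence relation is easy to state as a check over a bracketed analysis. The sketch below uses a representation invented for illustration (an inside-out list of attachments), not the program's own:

```python
# Sketch of the nesting constraint: once a level 2 ('#') affix has been
# attached, no level 1 ('+') affix may attach outside it.

def well_ordered(attachments):
    """attachments: list of (affix, level) pairs, inside-out."""
    seen_level2 = False
    for affix, level in attachments:
        if level == 2:
            seen_level2 = True
        elif seen_level2:        # a '+' affix outside a '#' affix
            return False
    return True

# non#[in+moral]: in+ attaches first, non# outside it -- fine
assert well_ordered([('in', 1), ('non', 2)])
# *in+[non#moral]: non# attaches first, in+ outside it -- blocked
assert not well_ordered([('non', 2), ('in', 1)])
```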
Aronoff continues to point out that the semantics of # boundaries tend to be more predictable and compositional than + boundaries. The meaning of callousness, for example, is more predictable from the meanings of callous and ness than the meanings of variety, notoriety and curiosity are from the meanings of their parts.

The following list summarizes some of the differences between + and #:

• + morphemes are (often) historically correlated with Latin; # with German and Greek
• + morphemes feed certain phonological rules (stress assignment, vowel shift); # do not
• + morphemes take precedence over #
• + morphemes can attach to bound morphemes; # cannot
• + morphemes are less productive than #
• + morphemes have less predictable semantics than #

The remainder of the paper will be divided into two sections: the first will be concerned with level 1 morphology and the second with level 2 morphology and compounding. Level 1 morphology has been studied more heavily in the linguistics literature; level 2 is perhaps more important for practical applications, at least in the short term.

4. Morphological Decomposition of Level 1 Affixes

A number of the differences between + and # ought to be relevant in decomposing level 1 affixes and reducing the possibility of spurious derivations. Consider how the first difference mentioned above, historical correlation, could be used to improve a decomposition program. It is very easy, for example, for a decomposition program to decide erroneously that acclamation is derived from clam, meaning roughly the result of having been clammed up. If the program could somehow split the Latinate and non-Latinate vocabularies, then the program could know that -ation cannot be attached to clam because clam is not Latinate. The program accomplishes this by maintaining a short list of words marked with an ad hoc feature [-Latinate].

The program might perform even better if the Latinate vocabulary were split still further. Consider, for example, the split between words ending with -ent and those ending with -ant. The first class are likely to have variants ending with -ence and -ency and the second are likely to have variants ending with -ance and -ancy. It seems extremely implausible for an -ent word such as president to take an -ant suffix: *presidant, *presidance, *presidancy. Thus, it would be desirable to partition the Latinate vocabulary into quite a number of subsets, each with different possibilities for suffixation. But how do we do this without assigning ad hoc features such as [+Latinate], [+ent], [+ant], [+Declension 1], [+Declension 2], etc.?

Not only is the feature approach ad hoc, but it is also missing an important asymmetry. Note that most words ending with -ency (e.g., presidency) are derived from words ending with -ent (e.g., president), and crucially not the other way around. The intuition that the relation "derived from" is asymmetric has some distributional support: notice that the percentage of words ending in -ency which are morphologically related to words ending in -ent is much larger than the percentage of words ending in -ent which are related to words ending in -ency. (The program estimates these percentages to be 73% (36/49) and 5% (36/710), respectively, using a procedure described below.)
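The estimation procedure alluded to here reduces to counting over the dictionary's word list. The following sketch uses a toy word set standing in for the real dictionary, and omits the program's filtering of short spurious stems:

```python
# Sketch of the asymmetry estimate over a toy dictionary.

WORDS = {'president', 'presidency', 'decent', 'decency', 'resident',
         'residency', 'tangent', 'agency', 'silent', 'parent',
         'moment', 'absent'}

def alternation_probability(suffix1, suffix2, words=WORDS):
    """P(suffix1 > suffix2): fraction of suffix1 words whose stem
    also occurs with suffix2."""
    with_s1 = [w for w in words if w.endswith(suffix1)]
    both = [w for w in with_s1 if w[:-len(suffix1)] + suffix2 in words]
    return len(both) / len(with_s1) if with_s1 else 0.0

print(alternation_probability('ency', 'ent'))   # 0.75 -- cf. the 73% figure
print(alternation_probability('ent', 'ency'))   # 0.375 on this toy set --
                                                # cf. 5% on the full dictionary
```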
This asymmetry is problematic for a concatenation model like MITalk's Decomp, which would place presidency and president on equal footing, deriving both from preside. Aronoff-style [Aronoff] truncation rules provide an attractive mechanism for accounting for the asymmetry. Recall that Aronoff proposed that nominee be derived from nominate by truncating the -ate suffix and attaching -ee in a single step. These truncation rules were necessary for him so that he could maintain his Word Based Hypothesis. The Word Based Hypothesis claims that words are formed from other words (possibly via truncation) and not from bound morphemes. Thus, in Aronoff's theory, there is no bound morpheme nomin-; there are only words (e.g., nominate and nominee). The generalizations that would be attributed to nomin- in other theories are captured in Aronoff's system by his truncation rules.

The program uses truncation rules to capture the asymmetry in the 'derived from' relation by permitting -ent to be truncated before -ency, but not the other way around. Thus, presidency is derived from president - -ent + -ency, and president is not derived from presidency because the program does not truncate -ency before -ent. Truncation rules are subject to a number of constraints. In particular, truncation is only found at level 1; truncation cannot apply at level 2 because, as mentioned above, level 2 affixes attach to words, not bound (= truncated) morphemes.

How does the program decide which suffixes can be truncated and when? Let me introduce the notation -ency > -ent to mean (roughly) that words ending with -ency are likely to be derived from words ending with -ent. The precise status of the '>' relation remains to be explored more fully. In some cases, the relation is a necessary condition; if presidency is derived from an English word then it must be derived from president. In other cases, the relationship expresses a possibility but not a necessity. For example, words ending in -ation may be related to words ending in -ate, but not necessarily. Marchand describes the relation as follows:

"The English vocabulary has been greatly enriched by borrowings, chiefly from Latin and French. In course of time, many related words which had come in as separate loans developed a derivational relation to each other, giving rise to derivative alternations. Such derivative alternations fall into three main groups. Group A is represented by the pairs 1) -acy / 2) -ate (as piracy ~ pirate), 1) -ancy, -ency / 2) -ant, -ent (as militancy ~ militant, decency ~ decent), 1) -ization / 2) -ize (as civilization ~ civilize), 1) -ification / 2) -ify (as identification ~ identify), 1) -ability / 2) -able (as respectability ~ respectable), 1) -ibility / 2) -ible (as convertibility ~ convertible), 1) -ician / 2) -ic(s) (as statistician ~ statistics), 1) -icity / 2) -ic (as catholicity ~ catholic), 1) -inity / 2) -ine (salinity ~ saline). If 1) is a derivation from an English word, the only possible word is 2), i.e., if piracy is a derivative from an English word, only pirate is possible. The statement does not imply that for every 1) there must be a 2). 1) may be a loan, or it may be formed on a Latin basis without any regard to the existence of an English word at all (enormity, for instance, is so coined). Nor does the derivational principle involve the existence of a 1) for every 2) (many words in -able or -ine are not matched by words in -ability resp. -inity).
Group B is represented by the pairs 1) -ation / 2) -ate (as creation ~ create), 1) -(e)ry / 2) -er (as carpentry ~ carpenter), 1) -eress / 2) -erer (as murderess ~ murderer), 1) -ious / 2) -ion (as ambitious ~ ambition), 1) -atious / 2) -ation (as vexatious ~ vexation). If 1) is a derivative from another English word, the derivational pattern 1) from 2) is possible, but not necessary. A derivative in -ation such as reforestation is connected with reforest, a derivative such as swannery is connected with swan, archeress is connected with archer, robustious is extended from robust (but otherwise an adj in -tious derived from a sb points to the sb ending in -tion, i.e. we have really type A).

Group C is nothing but a variant of A and concerns adjs in -atious as flirtatious. Originally deriving from sbs in -ation, the type is now equally connected with the unextended radical, i.e. flirt (the older derivation ostentatious 1658 has not entered this latter derivational connection)." [Marchand, pp. 165-166]

For pragmatic purposes, the program assumes that there is only one '>' relation, not three as Marchand suggests, and that the relation can be estimated statistically as follows:

Probability(suffix1 > suffix2) = (number of words ending with both suffix1 and suffix2) / (number of words ending with suffix1)

The program estimates, for example, that -ency > -ent with a probability of 73% (36/49) and that -ent > -ency with a probability of 5% (36/710). The 36 words ending in -ency which have a variant ending in -ent are: incumbency, complacency, indecency, excrescency, residency, presidency, ascendency, dependency, independency, superintendency, despondency, exigency, contingency, emergency, detergency, insurgency, deficiency, efficiency, sufficiency, proficiency, expediency, clemency, permanency, transparency, vicegerency, belligerency, currency, competency, prepotency, consistency, inconsistency, frequency, delinquency, constituency, solvency and fervency. The estimate should be almost 100%; the program believes that decency, cadency, tendency, ambitendency, pudency, agency, regency, urgency, counterinsurgency, valency, patency, potency, and fluency are not derived from -ent. Most of the errors can be attributed to a heuristic which excludes short stems (e.g., ag-) on the grounds that these stems are often spurious. These errors could be fixed by amending the heuristic to check a 'winners list' of one, two and three letter stems. Some of the other errors are due to accidental gaps in the dictionary.
The results of this statistical estimation are shown in the table below (where -0 denotes the null suffix):

-ability:  -able (43%), -ate (29%)
-able:     -0 (24%), -ation (18%), -ate (17%), -e (14%), -al (6%), -y (3%), -ion (2%), -ity (2%), -ous (2%), -ent (1%), -ive (1%)
-aceous:   -0 (19%), -e (7%), -ate (7%), -ation (4%), -y (4%), -ous (4%), -al (3%), -ary (3%), -ic (3%)
-acity:    -acious (38%)
-acy:      -ate (42%), -ation (18%), -al (13%), -e (8%)
-age:      -0 (51%), -y (13%), -e (12%), -al (5%), -ate (4%), -ation (4%), -able (4%), -on (4%), -ion (3%), -le (3%), -ic (3%), -ar (2%), -or (2%), -ial (2%)
-al:       -0 (17%), -e (7%), -ic (2%), -y (2%), -on (1%), -le (1%)
-ality:    -al (76%), -0 (19%), -ate (13%), -e (9%), -ation (7%), -ary (5%), -ous (5%), -able (4%), -ative (4%)
-ament:    -0 (38%), -ate (29%)
-an:       -0 (6%), -e (2%), -al (2%), -ous (1%), -y (1%), -on (1%), -ate (1%), -ation (1%)
-ance:     -ant (30%), -0 (26%), -e (15%), -ate (10%), -able (9%), -ation (9%), -or (7%), -al (4%), -ous (4%), -ion (4%), -ative (3%), -ive (3%), -y (3%)
-ancy:     -ant (40%), -0 (19%), -ation (12%)
-ant:      -ate (27%), -ation (21%), -0 (21%), -e (11%), -able (9%), -y (5%), -al (5%), -ous (5%), -ion (4%), -ent (3%), -ity (3%), -or (3%), -ive (2%), -an (1%), -ar (1%), -ic (1%), -ize (1%), -on (1%)
-ar:       -ate (13%), -e (9%), -ation (7%), -0 (6%), -ous (2%), -y (2%), -able (1%), -al (1%), -ite (1%)
-arity:    -ar (63%), -ate (26%), -ation (22%), -0 (13%)
-ary:      -0 (25%), -al (13%), -ate (10%), -e (8%), -ation (8%), -ar (6%), -ous (4%), -y (4%), -able (3%), -ion (3%), -ic (2%), -ity (2%), -ize (2%), -ant (2%), -or (2%)
-ate:      -0 (13%), -e (9%), -al (8%), -ic (4%), -y (3%), -on (1%), -le (1%), -ion (0%)
-ation:    -ate (42%), -e (21%), -0 (18%), -al (9%), -y (3%), -ous (3%), -ion (1%), -ic (1%), -on (1%)
-ational:  -ation (40%), -e (25%)
-ative:    -ation (56%), -ate (42%), -e (19%), -0 (17%), -able (17%), -ant (12%), -al (9%), -y (5%), -ity (4%), -ous (3%), -ance (3%)
-ator:     -ate (61%), -ation (48%), -ant (18%), -ative (18%), -able (18%), -e (15%), -al (9%), -0 (7%), -ar (6%), -ity (5%), -ous (4%), -ary (4%), -on (4%)
-atorial:  -ation (37%), -ator (26%), -atory (26%)
-atory:    -ation (63%), -ate (46%), -e (21%), -ative (20%), -ator (16%), -able (15%), -0 (13%), -ant (11%), -al (7%), -ar (4%)
-ature:    -ate (26%), -0 (21%), -ation (18%)
-bility:   -ble (62%), -on (14%)
-ble:      -on (5%), -0 (3%), -le (1%)
-bly:      -ble (73%)
-e:        -0 (4%)
-ee:       -0 (28%), -e (13%), -or (11%), -y (6%), -ation (6%), -ment (5%), -ate (5%), -ant (3%), -al (3%), -ion (3%), -able (3%)
-ence:     -ent (54%), -e (18%), -0 (15%), -ment (3%)
-ency:     -ent (73%), -ence (24%), -e (14%), -0 (12%)
-ent:      -0 (6%), -e (6%), -y (1%), -ate (1%), -al (1%), -ation (1%)
-ential:   -ence (59%), -ent (59%), -0 (26%), -e (20%)
-eous:     -e (5%), -y (4%), -0 (3%), -ic (3%), -ous (3%), -ate (3%), -on (2%)
-ia:       -ic (14%), -0 (7%), -y (7%), -e (4%), -ous (2%), -al (1%), -ate (1%)
-iac:      -ia (44%), -ic (19%)
-ial:      -0 (26%), -y (15%), -e (5%), -ate (3%), -al (2%), -ic (2%), -ize (2%)
-ian:      -0 (23%), -y (14%), -ic (7%), -al (6%), -e (4%), -ize (3%), -ia (3%), -ity (3%), -ium (3%)
-iant:     -iate (27%)
-iary:     -ial (25%), -0 (22%), -e (22%)
-iate:     -ial (13%), -e (9%), -0 (7%), -ate (6%), -ium (6%), -ia (5%), -ious (5%)
-iative:   -iate (70%)
-ibility:  -ible (73%), -ive (45%)
-ible:     -ion (25%), -ive (22%), -0 (20%), -e (12%), -or (10%), -ent (7%), -able (5%), -ory (5%), -ence (4%), -al (4%), -y (4%)
-ic:       -e (18%), -y (14%), -0 (12%)
-ical:     -y (55%), -ic (11%), -0 (8%), -ize (8%), -e (6%), -ist (6%), -al (2%), -ate (2%)
-icate:    -ication (26%), -ic (17%), -icity (15%), -e (14%), -y (11%), -0 (7%), -ical (7%)
-ication:  -y (66%), -ic (14%), -e (9%)
-icative:  -ication (50%), -icate (38%), -y (38%)
-icatory:  -ication (50%), -y (43%), -icate (36%)
-ician:    -ic (61%), -ical (32%), -0 (16%), -e (13%), -y (13%)
-icity:    -ic (63%), -e (18%), -0 (16%), -y (12%), -ical (10%), -ize (8%), -al (7%), -ication (7%)
-icize:    -ic (71%)
-ide:      -ate (8%), -ic (8%), -0 (7%), -ite (6%), -e (4%), -on (3%), -ous (3%), -al (3%), -ize (3%), -age (2%), -ium (2%)
-ience:    -ient (40%)
-iency:    -ient (100%)
-ient:     -e (11%), -0 (10%)
-ification: -ify (71%), -0 (22%), -e (18%), -ity (16%), -y (16%), -ic (11%)
-ify:      -0 (25%), -e (15%), -ic (15%), -y (15%), -ity (13%), -al (11%), -ate (9%), -ion (7%), -ite (6%), -ize (5%), -or (5%), -ar (4%), -ary (4%), -ical (4%)
-ion:      -e (31%), -0 (15%), -ic (1%), -y (1%), -al (1%)
-ional:    -ion (57%), -ive (21%), -0 (18%), -e (18%), -or (11%)
-ionary:   -ion (87%), -e (30%), -0 (26%), -ive (26%)
-ious:     -y (15%), -ity (13%), -ion (10%), -0 (9%), -e (9%), -ial (6%), -ium (5%), -ic (4%), -ate (3%), -ive (3%), -ist (2%)
-isation:  -ization (93%), -ize (70%), -0 (53%), -ity (33%), -ist (27%), -ic (20%), -e (17%)
-ish:      -0 (27%), -e (11%), -y (7%), -le (2%), -ic (2%)
-ist:      -0 (40%), -ic (19%), -ize (18%), -y (18%), -e (14%), -al (6%), -ity (5%), -ation (3%), -ate (2%), -able (1%), -ion (1%)
-istic:    -ist (46%), -ize (29%), -0 (27%), -e (17%), -ic (15%), -ity (13%), -y (13%), -al (10%)
-itarian:  -ity (57%), -ize (43%), -0 (36%), -e (36%)
-ite:      -0 (13%), -ic (11%), -e (6%), -ate (6%), -ous (6%), -y (2%), -ia (2%), -on (2%), -al (1%), -able (1%), -ity (1%), -ation (1%), -ion (1%), -or (1%)
-ity:      -0 (37%), -e (24%), -ous (6%), -ate (5%), -al (4%), -ation (3%), -y (2%), -ion (1%), -ic (1%)
-ium:      -ic (11%), -0 (8%), -ial (6%), -y (6%), -ia (6%), -e (6%), -ite (5%), -ate (4%), -ous (4%), -al (2%), -on (2%), -ion (2%), -ize (2%), -ist (2%)
-ival:     -ive (47%)
-ive:      -ion (59%), -e (26%), -0 (22%), -al (1%), -y (1%), -ation (1%)
-ivity:    -ive (66%), -ion (61%), -0 (39%), -or (32%), -ance (14%), -e (14%), -ible (11%)
-ization:  -ize (75%), -0 (59%), -ity (31%), -ist (25%), -ic (22%)
-ize:      -0 (47%), -ic (17%), -ity (17%), -y (14%), -e (12%), -ous (6%), -ate (4%), -al (4%), -ite (2%), -ation (1%), -ia (1%)
-le:       -0 (11%), -y (3%), -e (3%), -on (2%), -ic (1%)
-ment:     -0 (63%), -able (6%), -e (4%), -ation (4%), -or (3%), -ant (2%), -ate (2%), -ble (2%)
-mental:   -ment (77%), -0 (20%)
-mentary:  -ment (56%)
-on:       -0 (4%), -e (2%), -ic (2%), -y (1%)
-or:       -ion (30%), -e (27%), -0 (22%), -ive (16%), -ation (3%), -able (3%), -y (2%), -al (2%), -ate (2%), -ent (1%), -le (1%)
-ory:      -ion (56%), -e (34%), -ive (21%), -or (20%), -0 (11%)
-osity:    -ous (65%), -0 (15%), -al (12%), -ate (11%), -e (11%)
-ous:      -0 (13%), -ic (7%), -ate (6%), -e (6%), -y (4%), -al (4%), -on (2%)
-ular:     -le (31%), -0 (4%), -e (4%), -ate (4%)
-ularity:  -ular (67%), -le (28%)
-ure:      -0 (21%), -e (15%), -ion (11%), -or (8%), -ive (4%), -al (2%)
-ute:      -e (8%)
-utive:    -ute (67%)
-y:        -0 (19%), -e (6%)

The decomposition program uses the table above to decide which suffixes can be truncated and when. Consider the word presidency. The program notices that this word ends in -ency, so it looks in the table and discovers that -ency alternates with -ent (73%), -ence (24%), -e (14%) and -0 (12%). The program tries to replace the -ency with each of these sequentially until it finds a word in the dictionary. In this case, it will succeed on the first try when it replaces -ency with -ent and finds that the result, president, is a word in the dictionary.

Level 1 prefixes are processed through an analogous procedure, so that effect, for example, is derived from defect by truncating the ef- prefix and adding the prefix de-. The truncation mechanism is not generally employed by most authors for prefixing, and it may be a mistake to do so, but I used it anyway, mostly because it was available and filled a practical need.
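Driving the lookup from the table is then a matter of trying the alternants in order until a dictionary word turns up. The sketch below is an illustration: DICTIONARY stands in for the system's lexicon and ALTERNANTS holds one row read off the table above:

```python
# Sketch of the table-driven truncation lookup described above.

DICTIONARY = {'president', 'preside', 'confide', 'confident'}

# Each suffix maps to its alternants, highest probability first
# ('' is the null suffix -0).
ALTERNANTS = {'ency': ['ent', 'ence', 'e', '']}

def find_stem(word):
    """Truncate the word's suffix, substitute each alternant in turn,
    and return the first dictionary word found (or None)."""
    for suffix, alts in ALTERNANTS.items():
        if word.endswith(suffix):
            base = word[:-len(suffix)]
            for alt in alts:
                if base + alt in DICTIONARY:
                    return base + alt
    return None

print(find_stem('presidency'))   # -> 'president' (first alternant succeeds)
```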
The resulting decomposition program has been used to construct a forest of related words as illustrated below:

(38 port
  (aport)
  (comport (comportment))
  (deport (deportation) (deportee) (deportment))
  (disport)
  (export (exportation) (reexport))
  (import (important (importance)) (importation) (reimport))
  (portable)
  (portage)
  (portal)
  (portative)
  (portent (portentous))
  (portion
    (apportion (apportionment) (reapportion (reapportionment)))
    (proportion
      (disproportion (disproportionate (disproportionation)) (proportional) (proportionate))))
  (report (reportage))
  (transport (transportation)))

(36 infect
  (affect (affectation) (affection (affectionate)) (affective (affectivity)) (disaffect))
  (confect (confection) (confectionary))
  (defect (defection) (defective) (effect (effective (ineffective))))
  (disinfect (disinfectant))
  (infection)
  (infectious)
  (infective)
  (refect
    (perfect (imperfect (imperfection) (imperfective)) (perfection (perfectionist)) (perfective (perfectible)))
    (prefect (prefecture))
    (refection)
    (refectory (prefectorial))))

The forest was constructed by applying the decomposition procedure to every word in the dictionary and then indexing the results to show which forms were derived from which stems. Thus 38 words were found to be related to the stem port and 36 words were found to be related to infect. These results seem extremely promising; most of the relations appear to agree very closely with intuition.

Now that we have a fairly accurate method of decomposing words at level 1, how can this be put to practical use? For assigning stress, it would be useful to know the weight of the syllables in the stem. This is particularly necessary before so-called weak retraction suffixes (e.g., -ent, -ant, -ence, -able, -ance, -al, -ous, -ary). General principles of stress retraction (e.g., [Liberman and Prince]) predict that strong retractors (e.g., -ate, -ation) always back the stress up regardless of syllable weight (degráde / dègradátion), whereas weak retractors do so only if the preceding syllable is light (refér / réferent, with a light syllable before -ent, as opposed to cohére / cohérent, with a heavy syllable before -ent).

Given syllable weight, it is relatively well understood how to assign stress. A large number of phonological studies (e.g., [Chomsky and Halle], [Liberman and Prince], [Hayes]) outline a deterministic procedure for assigning stress from the weight representation and the number of extrametrical syllables (1 for nouns, 0 for verbs). A version of this procedure was implemented by Richard Sproat last summer, and was discussed at the last ACL meeting [Church].

It is generally believed that syllable weight is derivable from underlying vowel length and the number of consonants, but if one is trying to assign stress from the spelling, it can be difficult to know the vowel length and the number of consonants. The fact that inherence has a heavy penultimate syllable and that inference has a light penultimate syllable is extremely difficult to determine from the spelling. It would be considerably easier if syllable weight (or some correlate thereof, such as vowel length) were marked in a lexicon of stems, so that the program could determine syllable weight by decomposing a word into its pieces, looking them up in a morpheme lexicon, and then re-combining the results appropriately. Not only is it convenient for practical application to assume that stems are marked in the lexicon for syllable weight, but it may be necessary for linguistic reasons as well.
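Given such lexical markings, the retraction decision itself reduces to a one-line test. The sketch below is a simplification: the lexicon entries are invented for illustration, and "retracts" here just means that main stress moves off the stem-final syllable:

```python
# Sketch of the retraction logic: strong retractors always pull stress
# back; weak retractors pull it back only off a light syllable.

STEM_WEIGHT = {'refer': 'light', 'cohere': 'heavy', 'degrade': 'heavy'}
STRONG_RETRACTORS = {'ate', 'ation'}
WEAK_RETRACTORS = {'ent', 'ant', 'ence', 'able', 'ance', 'al', 'ous', 'ary'}

def stress_retracts(stem, suffix):
    """Does attaching the suffix move main stress off the stem-final
    syllable?"""
    if suffix in STRONG_RETRACTORS:
        return True                            # degrade -> degradation
    if suffix in WEAK_RETRACTORS:
        return STEM_WEIGHT[stem] == 'light'    # refer -> referent
    return False                               # stress-neutral suffix

assert stress_retracts('refer', 'ent')         # réferent
assert not stress_retracts('cohere', 'ent')    # cohérent
```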
Consider the stress alternation confide / confidence. This alternation is problematic because the i in confide seems to be underlyingly long whereas the i in confidence seems to be underlyingly short, and yet the two stems ought to share the same underlying form since the two words are morphologically related to one another. The solution to the confidence puzzle, I believe, is to say that the stem -fide is marked in the lexicon as underlyingly light, at least with respect to stress retraction (and to account for the tense vowel in confide in some other way [Church (forthcoming)]). The table below is presented as evidence that the confidence alternation is determined, at least in part, by some sort of lexical marking on stems. Note, for example, that -fer, -cel, -side, and -fide words display the confidence alternation, but -here, -pel, and -pose words do not.

alternation:
  refer    reference
  confer   conference
  infer    inference
  defer    deference
  excel    excellent, excellence, excellency
  reside   resident, residency
  preside  president, presidency
  confide  confident, confidence, confidency

no alternation:
  adhere   adherent, adherence, adhesive
  cohere   coherent, coherence, cohesive
  inhere   inherent, inherence, inhesion
  expel    expellent, expellant
  repel    repellent
  propel   propellent, propellant
  expose   exposal, exposure, expository
  dispose  disposal, disposure, dispository
  propose  proposal
  compose  composure

Assume the lexicon divides stems into at least two classes:

• Retraction Class I Stems (light): -fer, -cel, -side, -fide, -main, -vail, -note, -cede, -pete, -pair, -pare

• Retraction Class II Stems (heavy): -here, -pel, -pose, -hale, -pale, -grade, -vade, -flame, -suade, -place, -plore, -void, -clude, -prove, -sume, -fuse, -duce

where class I stems show stress alternations before weak retracting suffixes and class II stems do not.

This concludes what I wanted to say about level 1 decomposition. In summary, this section presented Aronoff-style truncation rules as an alternative to MITalk-style concatenation rules. Truncation rules have the advantage that they preserve the asymmetry in the 'derived from' relation, and that they correctly partition the lexicon into classes such as [+ent] and [+ant] without introducing unnecessary ad hoc features such as [+ent] and [+ant]. Some results of the new decomposition procedure were presented, and they seem to agree very closely with intuition. It was suggested that the decomposition procedure could be used in stress assignment, by decomposing words into morphemes, looking up the syllable weight of the pieces in a morpheme lexicon, and then recombining the results appropriately. This last suggestion has not yet been fully implemented.

5. Level 2 and Compounding

Most of the linguistic literature deals with level 1, where we find extremely interesting stress alternations and vowel shifts and so forth. Generally speaking, the phonology of level 2 and compounding is believed to be relatively straightforward. Something like the simple concatenation model in Decomp is not a bad first approximation. In fact, I believe the stress of level 2 and compounding is more interesting than has generally been thought. In particular, I am beginning to believe that level 2 affixes are not stress neutral at all, but rather that they stress as if they were parts of compounds. Note that under-, anti- and super- follow the general compound pattern where stress is assigned to the left member in nouns and to the right in verbs and adjectives.
Noun           Verb           Adjective
únderdog       undergó        underáge
ántifreeze                    antisócial
súpermarket    superimpóse    supersónic

6. Are Level 2 Affixes Really Stress Neutral?

It might be possible to extend this position to its logical extreme and say that all level 2 affixes stress like compounds, and thus completely do away with the concept of stress neutral affixes.

• Compound Theory: (All) Level 2 affixes are stressed just like compounds; they receive main stress on the left in nouns and main stress on the right in verbs and adjectives.

• Stress Neutral Theory: (At least some) Level 2 affixes are stress neutral; they are simply concatenated onto the stem (à la MITalk's Decomp).

The compound theory has much to recommend it. Indeed most level 2 prefixes are like under-, anti- and super- and show the compound stress pattern (stress on the left when nominal and on the right when verbal/adjectival). These prefixes cannot be accounted for easily under the stress neutral theory. The main support for the stress neutral theory seems to come from prefixes like un- which (almost) never take the main stress. However, un- can also be accounted for under the compound theory by noting that un- forms adjectives and verbs, and therefore main stress would fall on the right. Admittedly, there are a number of nominal compounds like pro-life and anti-abortion which take right stress, presumably because the semantics of the left member takes on a semi-adjectival status. Notice, for example, that the word antimatter has two stress patterns, one with main stress on the left and one with main stress on the right, just like the well-known compound blackboard. With left stress, the compound takes non-compositional semantics and with right stress the compound has a more compositional meaning. These facts suggest that the compound theory can be maintained to account for cases like pro-life, but only if the compound stress rules are refined to take the semantic facts into account.

Level 2 suffixes provide additional support for the compound theory. Consider suffixes like ment, hood, ship and ness, which appear to support the stress neutral theory because they never receive main stress. But they can also be accounted for under the compound theory because they form nouns, and therefore the main stress would be expected to fall on the left. Moreover, consider the level 2 adjectival suffixes -istic and -mental.1 These suffixes refute the stress neutral theory because they take the main stress, but they are no problem for the compound stress theory, which predicts that adjectival compounds should receive main stress on the right.

7. The Super-Puzzle and Compound Stress

In attempting to include prefixes as a subcase of compound stress, I did stumble over a very interesting problem in the theory of compound stress. Consider the contrast between súperconductor and superconductívity. Although both compounds are nominal, the first takes primary stress on the left member and the second takes stress on the right member. Upon further investigation, it appears that many compounds ending with level 1 suffixes (e.g., -ity, -ation) take primary stress on the right member. For example, here is a breakdown of compounds ending with the letters ion. Note the strong tendency for primary stress to end up on the right member.
• Left-Dominant: intersession, outstation, midsection2

• Right-Dominant: intercommunion, supervision, anteversion, intercession, supersession, intermission, echolocation, intercolumniation, contravallation, overpopulation, interlunation, intermigration, overcompensation, aftersensation, superfetation, superelevation, interaction, intersection, contradistinction, superinduction, superconduction, underproduction, contraposition, superposition, interposition, postposition, interlocution, counterrevolution

• Neither: tourbillion, interrogation, foreordination, redintegration, forestation, electrodeposition3

Thus, it appears that compounds ending with a level 1 suffix take right stress. If correct, however, the generalization is a puzzle for the level ordering hypothesis, which assumes that the stress rules of level 1 are opaque to the stress rules of level 2 and compounding. In other words, level ordering suggests a structure like super[conductivity], where level 1 takes precedence over level 2 and compounding, but stress assignment requires a different structure, [superconductive]ity, where the compound stress rule applies before the level 1 suffix is analyzed.

1. These suffixes cannot be level 1, because they don't force the secondary stress to fall two syllables before the main stress: *dèpartméntal (cf. dègradátion).

In this sense, words like superconductivity are very much like the well-known bracketing paradox ungrammaticality, where level ordering suggests one structure, un[grammaticality] (un# is a level 2 prefix which must scope outside of +ity, which is a level 1 suffix), and syntactic/semantic interpretation (LF) requires another, [ungrammatical]ity (un# attaches to adjectives and not to nouns). Note that stress assignment seems to side with the syntactic/semantic arguments in suggesting a left-branching structure that violates level ordering.

A solution to these bracketing paradoxes becomes apparent when we consider nominal Greek compounds like psychobiology with three or more morphemes. Notice that these compounds systematically take main stress on the middle morpheme.

2. None of the left-dominant words above end in the suffix +ion. Note, for example, the contrast between inter#session and inter+cess+ion. The left-dominant case does not end in the suffix +ion; the right-dominant case does.

3. Almost all of these exceptions are due to errors in the morphological decomposition algorithm. Tour#billion, inter#rogation, fore#station, and electrode#position are all incorrect analyses. It is highly unusual for the algorithm to make this many mistakes.
aeroneurosis, aerothermodynamics, astrobiology, astrogeology, astrophotography, autobiography, autohypnosis, autoradiograph, autoradiography, biogeography, biophysicist, biotechnology, chromolithograph, chromolithography, chronobiology, cryobiology, diageotropism, electroanalysis, electrocardiogram, electrocardiograph, electrodialysis, electrodynamometer, electroencephalogram, electroencephalograph, electroencephalography, electrophysiology, endoparasite, epidiascope, geochronology, geomorphology, heterochromatin, heterochromosome, histopathology, hypnoanalysis, magnetohydrodynamics, metaphysicist, metapsychology, microanalysis, microbarograph, microbiology, micrometeorology, micropaleontology, microparasite, microphotograph, microphotography, multivibrator, myocardiograph, neoorthodoxy, neuropathology, neurophysiology, orthohydrogen, otolaryngology, paleoethnobotany, parahydrogen, parapsychology, photochronograph, photoelectrotype, photogeology, photolithograph, photolithography, photomicrograph, photopolymer, phototelegraphy, phototypography, photozincograph, photozincography, pneumoencephalogram, pneumoencephalography, psychoanalyse, psychoanalysis, psychoanalyze, psychobiology, psychoneurosis, psychopathology, psychopharmacology, psychophysiology, radioautograph, radiobiology, radiomicrometer, radiotelegram, radiotelegraph, radiotelegraphy, radiotelemetry, radiotelephone, radiotelephony, semidiameter, semiparasite, spectroheliograph, spectrophotometer, stereoisomer, stereoisomerism, telephotography, telespectroscope, telestereoscope, teletypewriter, thermobarograph, thermobarometer, ultramicrometer, ultramicroscope, ultramicroscopy

Assume that compounds take stress on the right member when it is branching (bi-morphemic). Thus, psycho[biology] takes main stress on biology because it is branching. Let me suggest further that this same sort of explanation might carry over to explain the stress in the bracketing paradoxes such as superconductivity and ungrammaticality, where I claim that the right piece is 'branching' in order to account for the fact that main stress ends up on the right half.4 Note that I am
8. Conclusion

Two new ideas in machine morphological decomposition were presented. The discussion of level 1 proposed the application of Aronoff-style truncation rules as an effective means to capture the asymmetry in the 'derived from' relation. Secondly, the discussion of level 2 proposed ideas from the literature on compound stress as an alternative to the stress neutral approach taken in MITalk's Decomp.

References

Aronoff, M., Word Formation in Generative Grammar, MIT Press, Cambridge, MA, 1976.
Allen, J., Carlson, R., Granstrom, B., Hunnicutt, S., Klatt, D., Pisoni, D., Conversion of Unrestricted English Text to Speech, incomplete draft, underground press, 1979.
Chomsky, N., and Halle, M., The Sound Pattern of English, Harper and Row, 1968.
Church, K., Stress Assignment in Letter to Sound Rules for Speech Synthesis, in Proceedings of the Association for Computational Linguistics, 1985.
Church, K., The Confidence Puzzle and Underlying Quantity, forthcoming.
Hayes, B., A Metrical Theory of Stress Rules, Ph.D. Thesis, MIT, 1980.
Liberman, M., and Prince, A., On Stress and Linguistic Rhythm, Linguistic Inquiry 8, pp. 249-336, 1977.
Marchand, H., The Categories and Types of Present-Day English Word-Formation, University of Alabama Press, 1969.
Mohanan, K., Lexical Phonology, MIT Doctoral Dissertation, available from the Indiana University Linguistics Club, 1982.
A SENTENCE ANALYSIS METHOD FOR A JAPANESE BOOK READING MACHINE FOR THE BLIND

Yutaka Ohyama, Toshikazu Fukushima, Tomoki Shutoh and Masamichi Shutoh
C&C Systems Research Laboratories, NEC Corporation
1-1, Miyazaki 4-chome, Miyamae-ku, Kawasaki-city, Kanagawa 213, Japan

ABSTRACT

The following proposal is for a Japanese sentence analysis method to be used in a Japanese book reading machine. This method is designed to allow for several candidates in the case of ambiguous characters. Each sentence is analyzed to compose a data structure by defining the relationship between words and phrases. This structure (named network structure) involves all possible combinations of syntactically correct phrases. After network structure has been completed, heuristic rules are applied in order to determine the most probable way to arrange the phrases and thus organize the best sentence. All information about each sentence (the pronunciation of each word with its accent and the structure of phrases) will be used during speech synthesis. Experiment results reveal that 99.1% of all characters were given their correct pronunciation. Using several recognized character candidates is more efficient than using only first-ranked characters as the input for sentence analysis. This facility also increases the efficiency of the book reading machine in that it enables the user to select other ways to organize sentences.

1. Introduction

English text-to-speech conversion technology has substantially progressed through massive research (e.g., Allen 1973, 1976, 1986; Klatt 1982, 1986). A book reading machine for the blind is a typical use for text-to-speech technology in the welfare field (Allen 1973). According to the Kurzweil Reading Machine Update (1985), the Machine is in use by thousands of people in over 500 locations worldwide. In the case of Japanese, however, due to the complexities of the language, Japanese text-to-speech conversion technology hasn't progressed as fast as that of English. Recently a Japanese text-to-speech synthesizer has been introduced (Kabeya et al. 1985). However, this synthesizer accepts only Japanese character code strings and doesn't include a character recognition facility.

Since 1982, the authors have been engaged in the research and development of a Japanese sentence analysis method to be used in a book reading machine for the blind. The first version of the Japanese book reading machine, which was aimed at examining the algorithms and their performance, was developed in 1984 (Tsuji and Asai 1985; Tsukumo and Asai 1985; Fukushima et al. 1985; Mitome and Fushikida 1985, 1986). Figure 1 shows the book reading process of the machine. A pocket-size book is first scanned, then each character on the page is detected and recognized. Sentence analysis (parsing) is accomplished by using the character recognition result. Finally, synthesized speech is generated. The speech can be recorded for future use. The pages turn automatically.

[Figure 1. The Book Reading Machine Outline: a pocket-size book passes in turn through Automatic Paging, Image Scanning, Character Recognition, Sentence Parsing, Speech Synthesis, and Speech Recording.]

The Japanese sentence analysis method that the authors have developed has two functions: one, to choose an appropriate character among several input character candidates when the character recognition result is ambiguous; two, to convert the written character strings into phonetic symbols.
The written character strings are made up of Kanji (Chinese) characters and kana (Japanese consonant-vowel combination) characters. The phonetic symbols depict both the pronunciation and accent of each word. The structure of the phrases is also obtained in order to determine the pause positions and intonation. After briefly describing the difficulty of Japanese sentence analysis technology compared to that of English, this paper will outline the Japanese sentence analysis method, as well as experimental results.

2. Comparison of Japanese and English as Input for a Book Reading Machine

In this section, the difficulty of Japanese sentence analysis is described by comparison with that of English.

2.1 Conversion from Written Characters to Phonetic Symbols

In English, text-to-speech conversion can be achieved by applying general rules. For exceptional words which are outside the rules, an exceptional word dictionary is used. Accentuation can also be achieved by rules and an exceptional dictionary.

Roughly speaking, Japanese text-to-speech conversion is similar to that of English. However, in the case of Japanese, more diligent analysis is required. Japanese sentences are written by using Kanji characters and kana characters. Thousands of kinds of Kanji characters are generally used in Japanese sentences, and most of the Kanji characters have several readings (Figure 2(a)). On the other hand, the number of kana characters is less than one hundred. Each kana character corresponds to a certain monosyllable. Therefore, in the conversion of kana characters, kana-to-phoneme conversion rules seem to apply successfully. However, in two cases, when the kana characters ha and he are used as Kaku-Joshi, Japanese particles which follow a noun to form a noun phrase, the pronunciation changes (Figure 2(b)). Likewise, the reading of numerical words changes with context (Figure 2(c)).

As described above, the pronunciation of each character in Japanese sentences is determined by a neighbor character which combines with it to form a word. There are too many exceptions in Japanese to create general rules. Therefore, a large word dictionary which covers all commonly used words is generally used to analyze Japanese sentences.

2.2 Required Sentence Analysis Level

In English sentences, the boundaries between words are indicated by spaces and punctuation marks. This is quite helpful in detecting phrase structure, which is used to determine pause positions and intonation. On the contrary, Japanese sentences only have punctuation marks. They don't have any spaces which indicate word boundaries. Therefore, more precise analysis is required in order to detect word boundaries first. The structure of the sentence will be analyzed after the word detection.

[Figure 2. Examples of Japanese words. (a) Kanji characters: the character for 'day/sun' is read hi (day / sun) alone, and takes different readings in compounds such as ni-hon / nip-pon (Japan), nichi-ji (date and time), kusa-ka (a Japanese last name), gap-pi (date), tsuki-hi (months and days), kyo-u (today), kon-nichi (recent days), ichi-nichi / ichi-jitsu (one day), tsui-tachi (the 1st day of a month), and futsu-ka (the 2nd day of a month / two days). (b) Kana characters: ha-na-wa ki-re-i-da (Flowers are beautiful.), he-ya-e ha-i-ru (Entering the room.). (c) Numerical words: ip-pon (one [pen, stick, ...]), ni-hon (two [pens, sticks, ...]), san-bon (three [pens, sticks, ...]).]
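The alternations in Figure 2(c) can be stated programmatically. Below is a minimal Python sketch covering only the three euphonic forms of the counter hon shown in the figure; the table and function name are hypothetical, and a real system would draw the full set of forms from its word dictionaries rather than from a hand-written table.

# Euphonic readings of numeral + counter 'hon' from Figure 2(c).
HON_READINGS = {1: 'ip-pon', 2: 'ni-hon', 3: 'san-bon'}

def read_numeral_with_hon(n):
    """Return the romanized reading of numeral n with the counter 'hon'."""
    try:
        return HON_READINGS[n]
    except KeyError:
        # Other numerals have their own euphonic forms, not covered here.
        raise NotImplementedError('euphonic form not in this sketch')

print(read_numeral_with_hon(3))   # san-bon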
2.3 Character Recognition Accuracy

English sentences consist of twenty-six alphabet characters and other characters, such as numbers and punctuation marks. Because of the small number of English alphabet characters, the characters can be recognized accurately. Japanese sentences consist of thousands of Kanji characters, more than one hundred different kana characters (two kana character sets, Hiragana and Katakana, are used in Japanese sentences) and alphanumeric characters. Because of the variety of characters, even when using a well-established character recognition method, the result is sometimes ambiguous.

3. Characteristics of Sentence Analysis Method

The Japanese sentence analysis method has the following characteristics.

1. The mixed Kanji-kana strings are analyzed both through word extraction and syntactical examination. An internal data structure (named network structure in this paper), which defines the relationship of all possible words and phrases, is composed through word extraction and syntactical examination. After network structure has been completed, heuristic rules are applied in order to determine the most probable way to arrange the phrases and thus organize a sentence.

2. When an obtained character recognition result is ambiguous, several candidates per character are accepted. Unsuitable character candidates are eliminated through sentence analysis.

3. Each punctuation mark is used as a delimiter. Sentence analysis of Japanese reads back to front between punctuation marks. For example, the analysis starts from the position of the first punctuation mark and works to the beginning of the sentence. Thus, word dictionaries and their indexes have been organized so they can be used in this sequence.

4. The sentence analysis method is required to analyze unrestricted Japanese text in a short computing time. Therefore, it has been designed not to analyze deep sentence structure, such as semantic or pragmatic correlates.

5. At the user's request, the book reading machine can read the same sentence again and again. If the user wants to change the way of reading (e.g. in the case that there are homographs), the machine can also create other ways of reading. In order to achieve this goal, several pages of sentence analysis results are kept while the machine is in use.

4. Outline of Sentence Analysis System

As shown in Figure 3, the Japanese sentence analysis system consists of two subsystems and word dictionaries. The two subsystems are named the "network structure composition subsystem" and the "speech information organization subsystem", respectively. These subsystems work asynchronously.

[Figure 3. Sentence Analysis System Outline: recognized characters feed the network structure composition subsystem, which builds the network structure by consulting the word dictionaries through their indexes; the user's request drives the speech information organization subsystem, which reads the network structure and the dictionary contents and outputs speech information.]

4.1 Network Structure Composition Subsystem

As its input, the network structure composition subsystem receives character recognition results. When the character recognition result is ambiguous, several character candidates appear. During the character recognition, the probability of each character candidate is also obtained. Figure 4 is an example of a character recognition result. In Figure 4, the first character of the sentence has three character candidates, and the fifth and seventh characters have two candidates each. Except for the fifth character, all of the first-ranking character candidates are correct.
For the fifth character, however, the second-ranking character candidate is the desired character.

With the recognized result, the network structure composition subsystem is activated. Figure 5 describes how the recognition result (shown in Figure 4) is analyzed. Through the detection of punctuation marks in the input sentence (the recognition result), the subsystem determines the region to be analyzed. After one region has been analyzed, the next punctuation mark, which determines the next region, is detected. In the case of Figure 5, for example, the whole data will be analyzed at once, because the first punctuation mark is located at the end of the sentence.

Characters in the region are analyzed from the detected punctuation to the beginning of the sentence. The analysis is accomplished by both word extraction and syntactical examination. Words in dictionaries are extracted by using character strings which are obtained by combining character candidates. The type of the characters (kana, Kanji, etc.) determines which index for the dictionaries will be used.

[Figure 4. Character Recognition Result Example: an eight-character input sentence meaning 'Analyze a sentence.', with up to three recognition candidates per character position.]

[Figure 5. Sentence Analysis Example: the character candidates are combined into dependent and independent words (e.g. 'analyze', 'a sentence', 'a paragraph', 'length', 'again'), and only syntactically correct phrases and conjugations are retained.]

After extracting the words, phrases are composed by combining the words. Using syntactical rules (i.e. conjugation rules), only syntactically correct phrases are composed. Finally, by using these phrases, network structure is composed. The network structure obtained through the analysis described in Figure 5 is shown in Figure 6. This structure involves the following information:

• hierarchical relationship between sentence, phrases and words
• syntactical meaning of each word
• pointers to the pronunciation and accent information of each word in the dictionaries
• pointers between phrases which are used when the user selects other ways of reading

Some features of the Japanese language are utilized in the network structure composition subsystem, for example the following.

1. In general, a Japanese phrase consists of both an independent word and dependent words. A prefix word and/or a suffix word are sometimes adjoined. The number of dependent words is small compared with that of independent words, so it seems efficient to analyze dependent words first. Thus, the analysis is accomplished from the end of the region to the beginning.

2. Independent words mostly include non-kana characters; conversely, dependent words are written in kana characters. Therefore, higher priority is given both to independent words which include a non-kana character and to dependent words which consist of only kana characters.

3. The number of Kanji characters is far greater than that of kana characters. Therefore, it seems efficient to use a Kanji character as the search key to scan the dictionary indexes. These indexes are designed so that the search key must be a non-kana character in cases where there is one or more non-kana characters.
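To make the candidate-combination step concrete, the following is a minimal Python sketch of word extraction over an ambiguous recognition result. It is an illustration only: it assumes the dictionary is a plain set of strings and scans left to right, whereas the actual subsystem scans indexed dictionaries from the end of each region toward the beginning and applies conjugation rules.

from itertools import product

def extract_words(candidates, dictionary, max_len=8):
    """candidates: list of lists; candidates[i] holds the recognition
    candidates for character position i, best first.
    Returns (start, end, word) triples usable as lattice edges."""
    words = []
    n = len(candidates)
    for start in range(n):
        for end in range(start + 1, min(n, start + max_len) + 1):
            # Try every combination of one candidate per position.
            for chars in product(*candidates[start:end]):
                word = ''.join(chars)
                if word in dictionary:
                    words.append((start, end, word))
    return words

# Hypothetical example with romanized stand-ins for the characters:
lattice = extract_words([['b', 'u'], ['u'], ['n']], {'bun', 'un'})
print(lattice)   # [(0, 3, 'bun'), (1, 3, 'un')]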
4.2 Speech Information Organization Subsystem

With the user's request for speech synthesis, the speech information organization subsystem is activated. This subsystem determines the best sentence (a combination of phrases) by examining the phrases in network structure. After organizing the sentence, the information for speech synthesis is then organized. The pronunciation and accent of each word are determined by using the dictionaries. The structure of the sentence is obtained by analyzing the relationship between phrases. In the case of numerical words, such as 1,234.56, a special procedure is activated to generate the reading. In case the user requests other ways of reading the sentence, the subsystem chooses other phrases in network structure, thus organizing the speech synthesis information.

[Figure 6. Network Structure Example: the sentence node dominates alternative phrase nodes, which in turn dominate word nodes; each word node carries pointers to its pronunciation and accent information in the dictionaries.]

In order to determine the most probable phrase combination in network structure, heuristic rules are applied. The rules have been obtained mainly by experiments. Some of them are as follows.

[1] Number of Phrases in a Sentence: The sentence which contains the least number of phrases will be given the highest priority.

[2] Probabilities of Characters: The phrase which contains more probable character candidates will be given higher priority. This probability is obtained as the result of character recognition.

[3] Written Format of Words: Independent words written in kana characters will be given lower priority. Independent words written in one character will also be given lower priority.

[4] Syntactical Combination Appearance Frequency: A frequently used syntactical combination will be given higher priority (e.g. a noun-particle combination).

[5] Selected Phrases: A phrase which has once been selected by a user will be given higher priority.

In the case of Figure 6, the best way of arranging the phrases is determined by applying the heuristic rule [1].
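The paper states these heuristic rules qualitatively but not their numerical form. The sketch below shows one plausible way rules [1]-[5] could be combined into a single score for ranking candidate sentences; all weights and field names are invented for illustration.

def score_sentence(phrases):
    """phrases: list of phrase records, each a dict with keys
    'char_probs' (recognition probabilities of its characters),
    'all_kana_independent' (bool), 'freq_bonus', 'user_selected'."""
    score = 0.0
    score -= 10.0 * len(phrases)                  # rule [1]: fewer phrases
    for p in phrases:
        score += sum(p['char_probs'])             # rule [2]: char probabilities
        if p['all_kana_independent']:
            score -= 1.0                          # rule [3]: written format
        score += p['freq_bonus']                  # rule [4]: frequent combinations
        if p['user_selected']:
            score += 5.0                          # rule [5]: previously selected
    return score

def best_sentence(candidate_sentences):
    """Pick the highest-scoring phrase combination in the network."""
    return max(candidate_sentences, key=score_sentence)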
4.3 Word Dictionaries

The dictionaries used in this system are the following.

(1) Independent Word Dictionary: nouns, verbs, adjectives, adverbs, conjunctions, etc. 65,850 words
(2) Proper Noun Word Dictionary: first names, last names, city names, etc. 12,495 words
(3) Dependent Word Dictionary: inflection portions for verbs and adjectives; they are used for conjugation according to their usage. 560 words
(4) Prefix Word Dictionary: 153 words
(5) Suffix Word Dictionary: 725 words

Each word stored in these dictionaries has the following information:

(a) written mixed Kanji-kana string (first-choice)
(b) syntactical meaning
(c) pronunciation
(d) accent position

Items (a) and (b) of all words are gathered to form the following four indexes:

* Kana Independent Word Index
* Kana Dependent Word and Kana Suffix Word Index
* Non-Kana Word Index
* Prefix Word Index

These indexes are used by the network structure composition subsystem. Items (c) and (d) are used by the speech information organization subsystem.

5. Experimental Results

Some experiments have been conducted in order to evaluate the sentence analysis method. In this section, these experimental results are described.

5.1 Pronunciation Accuracy

The accuracy of pronunciation has been evaluated by counting correctly pronounced characters. In this experiment, character code strings were used as the input data. The following two whole books were analyzed.

• Tetsugaku Annai (Introduction to Philosophy) by Tetsuzo Tanikawa (an essay)
• Touzoku Gaisha (The Thief Company) by Shin-ichi Hoshi (a collection of short stories)

As shown in Table 1, 99.1% of all characters were given their correct pronunciation.

Table 1. Score for Correct Pronunciation.
Total Characters: 128,289 (100%)
Correct Characters: 127,108 (99.1%)

The major causes of mispronunciation are as follows.

(1) Unregistered words in dictionaries
  (1-a) uncommon words
  (1-b) proper nouns
  (1-c) uncommon written style
(2) Pronunciation changes in the case of compound words
(3) Homographs
(4) Word segmentation ambiguities
(5) Syntactically incorrect Japanese usage

5.2 Efficiency as the Postprocessing Role for Character Recognition

The efficiency as the postprocessing role for character recognition has been evaluated by comparing the characters used for speech synthesis with the character recognition result. Twelve pages of character recognition results (four pages from each of three books) have been analyzed. The books used as the input data are as follows.

• Tetsugaku Annai (Introduction to Philosophy) by Tetsuzo Tanikawa (an essay)
• Touzoku Gaisha (The Thief Company) by Shin-ichi Hoshi (a collection of short stories)
• Yujo (The Friendship) by Saneatsu Mushanokouji (a novel)

Table 2 shows scores for the character recognition result.

Table 2. Character Recognition Result.
Total Characters: 6,793 (100%)
Correct Characters (at 1st Ranking): 6,757 (99.5%)
Correct Characters (in 1st to 5th Ranking): 6,783 (99.9%)

Table 3 shows the score for characters which are chosen as correct characters by the sentence analysis method, as well as the score for correctly pronounced characters.

Table 3. Scores after Sentence Analysis.
Total Characters: 6,793 (100%)
Characters Treated as Correct Characters: 6,772 (99.7%)
Characters Correctly Pronounced: 6,728 (99.0%)

As shown in Tables 2 and 3, the score for correct characters obtained after the sentence analysis was 99.7%, while the score for the 1st-ranking characters obtained in the character recognition result was 99.5%. This experimental result reveals that the sentence analysis method is effective in the postprocessing role for character recognition. The state of errors found during the experiment is shown in Table 4. The difference between (b') and (b3) in Table 4 indicates the effectiveness of the sentence analysis method. The score of 99.0% in Table 3 indicates the efficiency of the sentence analysis method in the book reading machine.

Table 4. State of Errors.
<< Character Recognition Errors >>
(a) 1st-Ranking Chars are Incorrect: 36
(a1) Correct Chars in 2nd-5th: 26
(a2) Not among Candidates: 10
<< Sentence Analysis Errors >>
(b) Total Incorrect Chars: 21
(b1) Incorrect Chars among (a1): 4
(b2) Incorrect Chars among (a2): 10
(b3) Incorrect Chars While Char Recognition was Correct: 7
(b') Correct Chars While the 1st-Ranking Chars were Incorrect (b' = a1 - b1): 22

5.3 Efficiency of Manual Selection

To examine the efficiency, an experiment has been conducted where sentences have been read both automatically and with the help of manual manipulation. The same text used in Section 5.2 was used in this experiment. Table 5 shows scores for the correctly pronounced characters. As shown in Table 5, 99.9% and 99.8% of all characters were given correct pronunciation after the manual selection, while 99.3% and 99.0% of all characters had been given their correct pronunciation before the manual selection, respectively. These scores reveal that most mispronunciations could be recovered by manual selection, so that nearly perfectly pronounced reading can be taped.

Table 5. Scores for Characters.
Total Characters: 6,793 (100%)
<< Input Data is Correct Characters >>
Before Selection: 6,745 (99.3%)
After Selection: 6,787 (99.9%)
<< Input Data is Recognized Characters >>
Before Selection: 6,728 (99.0%)
After Selection: 6,777 (99.8%)

6. Conclusion

A sentence analysis method used in a Japanese book reading machine has been described. Input sentences, where each character is allowed to have several candidates, are analyzed by using several word dictionaries, as well as by employing syntactical examinations. After generating network structure, heuristic rules are applied in order to determine the most desirable sentence to be used for speech information generation. The results of experiments reveal that 99.1% of all characters used in two whole books were correctly converted to their pronunciation. Even when the character recognition result is ambiguous, correct characters can often be chosen by the sentence analysis method. By manual selection, most incorrect characters can be corrected.

Currently, the authors are improving the sentence analysis method, including the heuristic rules and the contents of the dictionaries, through book reading experiments and data examinations. This work is, needless to say, aimed at offering better quality speech to blind users in a short computing time. The authors expect that their efforts will contribute to the welfare field.

ACKNOWLEDGEMENTS

The authors would like to express their appreciation to Mr. S. Hanaki for his constant encouragement and effective advice. The authors would also like to express their appreciation to Ms. A. Ohtake for her enthusiasm and cooperation throughout the research. This research has been accomplished as part of the research project "Book-Reader for the Blind", one project of The National Research and Development Program for Medical and Welfare Apparatus, Agency of Industrial Science and Technology, Ministry of International Trade and Industry.

REFERENCES

<< in English >>
Allen, J., ed., 1986 From Text to Speech: The MITalk System. Cambridge University Press.
Allen, J. 1985 Speech Synthesis from Unrestricted Text. In Fallside, F. and Woods, W.A., eds., Computer Speech Processing. Prentice-Hall.
Allen, J. 1976 Synthesis of Speech from Unrestricted Text. Proc. IEEE, 64.
Allen, J. 1973 Reading Machine for the Blind: The Technical Problems and the Methods Adopted for Their Solution. IEEE Trans., AU-21(3).
Kabeya, K.; Hakoda, K.; and Ishikawa, K. 1985 A Japanese Text-To-Speech Synthesizer. Proc. AVIOS '85.
Klatt, D.H. 1986 Text to Speech: Present and Future. Proc. Speech Tech '86.
Klatt, D.H. 1982 The Klattalk Text-to-Speech System. Proc. ICASSP '82.
Mitome, Y. and Fushikida, K. 1986 Japanese Speech Synthesis System in a Book Reader for the Blind. Proc. ICASSP '86.
1985 Kurzweil Reading Machine Update. Kurzweil Computer Products.
<< in Japanese >>
Fukushima, T.; Ohyama, Y.; Ohtake, A.; Shutoh, T.; and Shutoh, M. 1985 A sentence analysis method for Japanese text-to-speech conversion in the Japanese book reading machine for the blind. WG preprint, Inf. Process. Soc. Jpn., WGJDP 2-4.
Mitome, Y. and Fushikida, K. 1985 Japanese Speech Synthesis by Rule using Formant-CV Speech Compilation Method. Trans. Committee on Speech Res., Acoust. Soc. Jpn., S85-31.
Tsuji, Y. and Asai, K. 1985 Document Image Analysis based upon Split Detection Method. Tech. Rep., IECE Jpn., PRL85-17.
Tsukumo, J. and Asai, K. 1985 Machine Printed Chinese Character Recognition by Improved Loci Features. Tech. Rep., IECE Jpn., PRL85-17.
JAPANESE PROSODIC PHRASING AND INTONATION SYNTHESIS

Mary E. Beckman [1] and Janet B. Pierrehumbert
Linguistics and Artificial Intelligence Research
AT&T Bell Laboratories, 600 Mountain Ave, Murray Hill, NJ 07974

1. Present address: Ohio State University, Department of Linguistics, 1841 Millikin Rd, Columbus, OH 43210.

ABSTRACT

A computer program for synthesizing Japanese fundamental frequency contours implements our theory of Japanese intonation. This theory provides a complete qualitative description of the known characteristics of Japanese intonation, as well as a quantitative model of tone-scaling and timing precise enough to translate straightforwardly into a computational algorithm. An important aspect of the description is that various features of the intonation pattern are designated to be phonological properties of different types of phrasal units in a hierarchical organization. This phrasal organization is known to play an important role in parsing speech. Our research shows it also to be one reflex of intonational prominence, and hence of focus and other discourse structures. The qualitative features of each phrasal level and their implementation in the synthesis program are described.

1. INTRODUCTION

In this paper, we will present a computer program for synthesizing fundamental frequency contours for standard Japanese. Fundamental frequency (f0) is the paramount physical correlate of the sensation of pitch, and, in many languages, the time course of f0 is one of the primary phonetic manifestations of intonation. This is especially true in Japanese, where duration and amplitude do not have the consequential role in communicating intonational structure that they do in English (Beckman, 1986). Accordingly, a program for synthesizing Japanese f0 contours is tantamount to a computational implementation of a theory of Japanese intonation.

The theory that we have implemented in our synthesis program is based on a review of the literature in English and Japanese, and on the results of an extensive series of experiments in which we examined and made f0 measurements of about 2500 intonation contours in order to resolve some of the many problems not answered in the literature. These experiments have uncovered important facts about the hierarchical structure underlying Japanese prosody and about the manifestations of focus in Japanese. We have incorporated these discoveries in our synthesis program, which, we believe, covers all known qualitative characteristics of Japanese intonational melody. Informal listening tests by Japanese speakers indicate that the f0 contours which the program produces sound quite natural. In some cases, the synthesized contours were even preferred to the genuine human intonation contours on which they are modeled.

Although the main concern of our research was to provide an accurate phonological and phonetic characterization of Japanese intonational structure that could be used in the automatic computation of f0 contours, our description of Japanese prosodic phrasing and intonation synthesis is also of direct relevance to issues in several other areas, including the role of prosodic phrasing in the parsing of speech, the relationship between intonational patterns and discourse phenomena such as focus, and the development of a more accurate understanding of the phonological mechanisms of intonation as a universal component of human speech. The computer implementation of the theory in turn should provide a practical tool for further research in these areas.
These other background issues are discussed in Sections 1.1-1.3. Section 2 then summarizes the characteristics of Japanese intonation that we have incorporated in our synthesis program, and Section 3 gives a detailed account of the program itself.

1.1 Prosodic Phrasing and Syntactic Parsing

Prosodic organization of the sort that we discovered for Japanese bears strongly on current issues in syntactic parsing. It is well known that intonational phrase boundaries can play a crucial role in parsing speech. For example, if the sentence in (1) is said without any internal phrase boundaries, it produces a garden path; the human parser interprets several bugs as the object of left, and then is unable to arrive at a syntactic role for the final verb phrase.

(1) When we left several bugs in the program still hadn't been corrected.

On the other hand, if the sentence is produced with the intonation break indicated by the comma in (2), several bugs is readily interpreted as the subject of the main clause.

(2) When we left, several bugs in the program still hadn't been corrected.

Intonation breaks can also be used to disambiguate sentences with ambiguous scope of negation or conjunction. Thus in example (3), the break represented by the comma forces the reading in which the scope of negation is the main verb clause (Because they were mad, they didn't leave), as opposed to the reading in which the scope of negation is the subordinate clause (It was not because they were mad that they left).

(3) They didn't leave, because they were mad.

Similarly in (4), the break after mnemonic rhyme prevents sublime from modifying free meter, whereas under the alternative phrasing in (5), sublime is taken to modify both conjuncts.

(4) Sublime mnemonic rhyme, and free meter.

(5) Sublime, mnemonic rhyme and free meter.

In reviewing these examples, we have spoken as if there were only one type of intonational phrase boundary. And the most substantial current proposal about the role of intonational phrasing in the parsing of Japanese (Marcus and Hindle, 1985) takes into account only a single level of phrasing. In actuality, however, Japanese and English both have several different types of intonational phrase, which are related to each other hierarchically. [2] As Marcus and Hindle point out to us, major modifications to their proposal will be necessary to accommodate the role of the complete hierarchical intonational structure in parsing.

2. Section 2 summarizes our results on the levels of phrasing found in Japanese. Beckman and Pierrehumbert (forthcoming) give a detailed comparison to the analogous levels of phrasing in English.

1.2 Focus and Discourse Structure

Another major result of our experiments was to be able to describe the manifestations of focus in terms of the phonological structures we discovered. We use the word focus here in the sense of Chomsky (1971), to characterize words or phrases which are intonationally marked as prominent. This contrasts with usage in the AI literature, where the focus space is used to describe entities which are assumed to be salient with respect to a given discourse segment. However, the concepts are related to each other via the broader concept of the attentional structure, as described in Grosz and Sidner (1985).

Broadly speaking, intonational prominence is used to modify the attentional state. A word or phrase that is marked by intonational prominence is made phonetically more salient; its prosodic coloring is more attention-demanding than it otherwise would be. One reason for a word or phrase to receive intonational prominence is that it refers to something which is being added to the focus space.
Or, if the entity referred to is already in the focus space, the word or phrase may be made intonationally prominent because the referent is under contrast or in some other way plays a marked role in the utterance. The presence or absence of intonational prominence is thus very much analogous to the use of full referring expressions versus pronominal forms. The analogy breaks down, however, when the range of possible use is considered. Pronominal forms and other sorts of anaphora can be used in place of full referring expressions only in some syntactic categories and positions. Intonational prominence, by contrast, can be absent or present on any word. Therefore, the study of how intonational prominence is used promises to make crucial contributions to developing a theory of attentional structure. But an accurate controlled study of the use of intonational prominence is impossible without an exact characterization of the form of intonational prominence. A precise phonological and phonetic description of intonational structure is thus an important prerequisite to the development of theories of discourse structure.

We also note that it is crucial to take focus, in the linguistic sense, into account in addressing the role of intonational phrasing in parsing. One of the main results of our experiments was the discovery that focus systematically affects prosodic phrasing in Japanese. Any parser intended for use with real speech must be able to accommodate the way in which focus and syntactic structure interact to determine the observed phrasing.

1.3 Japanese and English Intonation

A final motivation for our description of Japanese was to contribute to a more universal understanding of intonational structure. Our work is in some sense an extension of work on an earlier model of English intonation (Pierrehumbert, 1980, 1981; Liberman and Pierrehumbert, 1984; Anderson, Pierrehumbert, and Liberman, 1984). We first became interested in synthesizing f0 contours in Japanese because there are known to be formal differences between Japanese and English prosody. We wished to discover what aspects of a theory developed for English prosody would carry over to a language which differed in many ways, and how such shared principles would interact with language-specific principles.

1.3.1 Basic Principles -- One principle that can be assumed to be universal is the notion that intonation is separable from the text of an utterance not just physically but also linguistically. When a speaker produces an utterance with a given intonation pattern, he is implementing two separate strings of phonological elements in parallel. The textual string of distinctive segmental events that is realized in the spectral patterns of the utterance is conceptually distinct from the string of distinctive melodic events that is realized in the f0 contour. The physical implementations of these two representational strings are coordinated by a phonological specification of the alignment between the textual events (phonemic segments and phrasal groups of segments) and the melodic events (tones and tone configurations).

1.3.2 English Tone Configurations -- In English, as is well known, there are two types of basic tone configurations. Some tone configurations, which are called pitch accents, are placed on especially prominent syllables in a phrase.
If the placement of the special prominences shifts because of emphasis or focus, the pitch accents move along with them. Other tone configurations are placed at the edges of phrases without regard for the locations of the prominent syllables within the phrases. If the phrasing changes, these tones must also move. For both types of tone configuration, the speaker can select among several different patterns. His choice appears to convey a message about propositional attitude. For example, one pattern might suggest that the speaker is impatiently repeating what he feels should be obvious to the listener while another would imply that he is uncertain about the relevance of what he is saying, as illustrated in Figure 1.

[Figure 1. Fundamental frequency contours (f0 in Hz, approximately 75-350 Hz) for two intonation patterns for the utterance An orange ballgown. The tones in the melody are transcribed using the notation of Pierrehumbert (1980, 1981), with "*" for the tone in a pitch accent that associates to the stressed syllable and "%" for a boundary tone. Version (a) is a "surprise-redundancy contour" with a L* pitch accent on the stressed syllable in ballgown, a H* pitch accent on orange, and a L% boundary tone. Version (b) implies uncertainty, with a scooped rising accent (L*+H) on each word followed by a L H% phrase-final boundary sequence.]

1.3.3 Stress -- Japanese phrasal prosody differs from English in several crucial ways. First, Japanese does not have lexical stress as English does. The prominent syllables that carry pitch accents in English are marked also by a rhythmic salience -- an extra duration and loudness that adds another sort of prosodic prominence to the intonational prominence of the pitch accent. Especially prominent elements in a Japanese utterance can also be longer and louder, but unlike in English, this rhythmic prominence is not a lexical feature. That is, words in Japanese do not have the lexical markings of stress that in English give a rhythmic prominence to the first syllable in seven and the second syllable in eleven even in the absence of a pitch accent. Instead, Japanese has a lexical distinction between accented and unaccented words.

1.3.4 Japanese Lexical Accent -- Accented words have a fundamental frequency fall at some designated syllable; around the lexically designated location there is a sharp descent from a relatively higher pitch level to a relatively lower one. We represent this fall as a sequence of a high tone and a low tone, or H L, as illustrated in the following schematization of the accented word yamaza'kura:

(6)   ya ma za' ku ra
              |
              H  L

Here the line coming up from the H indicates that the high tone is associated to the designated syllable za'. That is, the realization of the H tone in the resulting f0 contour must occur concurrently with the production of the syllable's segments. The relatively lower pitch level of the L immediately following the associated H results in the pitch fall of the accent.
Unaccented words differ from accented words in having no syllable designated to carry the H of the accent fall, and hence no lexically associated tone, as in:

(7)   mu ra sa ki i ro

Since the presence or absence of an accent HL sequence is a property of the component lexical items, an entire sentence may have no accents; this contrasts with the situation in English, where it is impossible to utter a sentence without placing a pitch accent on at least one syllable.

1.3.5 Choice of Tune and Phrasing -- Another important difference is that, utterance-internally in Japanese, there is no paradigmatic choice among different tone patterns to express differences in meaning such as uncertainty or impatient rejoinder. In other words, the shape of the accent HL contour is a property of the lexical feature accented, and there is nothing corresponding to the choice of tone pattern for the pitch accent in English. At the end of the phrase, however, there is a distinction between rising and falling contours, which can convey the sort of meanings expressed by the choice of tone patterns at the edges of phrases in English.

Because of the lexical origin of the phrase-internal tone features in Japanese, the system of phrasal intonation is relatively impoverished compared to English. Other than the limited choice of pattern type at the end of the phrase, the only dimensions of variation seem to be different choices of phrasing and of pitch range. Our experiments were designed to explore how phrasing is conveyed and what the consequences of local manipulations of pitch range are.

2. THE HIERARCHY OF PHRASE LEVELS

In our data, we have found evidence for three levels of phrasing marked by f0 features. We call these three types of phrases the accentual phrase, the intermediate phrase, and the utterance.

2.1 The Accentual Phrase

The lowest level, the accentual phrase, is a phrasal unit containing at most one accent. This unit may be a single word. However, when words are combined into sentences, it is quite usual for some to lose their status as separate accentual phrases. Noun-noun compounds typically form a single accentual phrase, as do adjective-noun sequences or sequences of direct object and governing verb.

Apart from the possible occurrence of an accent, the hallmark of an accentual phrase is an f0 rise at its beginning. We account for this rise by positing a L% tone (the boundary L%) [3] marking the phrase boundary, and a H tone (the phrasal H) associated with a designated syllable near the beginning of the phrase. If the sample accented and unaccented words shown above in (6) and (7) were produced as complete accentual phrases, they might be represented as in (8):

(8)   yamaza'kura: (L%) H HL L%          murasakiiro: (L%) H L%

The tones that we have represented here are the only ones we posit for the accentual phrase. [4] We interpret f0 patterns at places not occupied by the indicated tones as arising from a phonetic process which interpolates between the assigned target values for these tones.

3. Here we use the % notation used by Pierrehumbert (1980) to designate a boundary tone.

4. Note that we put the first L% tone in each phrase in parentheses, because we consider it to be an edge feature of the preceding accentual phrase rather than of the accentual phrase being represented.

This notion of phonetic interpolation differs radically from more traditional representations of the accentual phrase. Studies of Japanese in the school of modern generative phonology have asserted that the accentual phrase is the domain of a process called tone spreading, whereby tones are copied from their originally specified places to associate to every syllable in the phrase. Thus in accented phrases, the L tone of the accent is made to associate with all syllables following the accent in the phrase.
The H tone, conversely, is made to associate with all syllables preceding the accent, except possibly for the first, which might be associated instead to a L tone (corresponding to the L% that we take as marking the preceding phrase boundary). In unaccented phrases, similarly, the phrasal H tone is thought to be associated to all the syllables after the first. These assumptions give rise to representations like those in (9). The phonetic prediction of such a representation is that a spread tone will be realized as a sustained pitch level over the syllables to which it is copied.

(9)   ya ma za' ku ra          mu ra sa ki i ro
      L  H  H   L  L           L  H  H  H  H  H

Our data, however, demonstrate that Japanese actually has no such rules of tone spreading. For example, in an utterance-medial unaccented phrase, there is a smooth fall from the phrasal H tone near the beginning of the phrase to the L% at the boundary before the next accentual phrase. The slope of this fall varies inversely with the separation of the two tones, as would be expected if a simple linear interpolation between fixed end point values were stretched to occupy a larger and larger distance. This generalization is illustrated in Figure 2, which shows f0 contours for segmentally matched unaccented sentence-medial phrases with 1, 2, 3, 5, and 6 syllables intervening between the phrasal H and the boundary L% for the next accentual phrase. Slopes of regression lines fit over the H-L% transition are indicated. The inverse correlation between these slopes and the number of syllables in the phrase is not compatible with the notion that the phrasal H tone has spread to associate with all following syllables up to the boundary L%. It would arise naturally, however, by a phonetic process which interpolates linearly between the values of the H on the second syllable and the L%.

[Figure 2. Fundamental frequency contours for five segmentally matched unaccented phrases with varying numbers of syllables between the phrasal H and the boundary L%. The dashed line in each panel is a regression curve fit to the f0 values between the two tones, and the number in the upper right is the slope of the regression curve.]

The finding that Japanese has no tone spreading is particularly significant, since most modern theories of phonology assume that surface phonological representations (those which are interpreted phonetically) are fully specified, meaning that a specific feature value must be assigned wherever a feature of some sort could be assigned a value. There has been considerable controversy about what phonological rules are necessary to generate the correct fully specified representations. Our results show, however, that at least for tone, the surface representations are only partially specified. That is, only some of the syllables that could in theory be assigned tonal values actually have associated tones. This is consistent with a view in which the surface representations are merely descriptions of the phonetic form, in a spirit similar to what Marcus et al. (1983) have proposed for surface syntactic representation.
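The interpolation account lends itself directly to computation. The Python sketch below, with invented times and target f0 values, shows how a purely target-based contour automatically yields the inverse relation between slope and tone separation seen in Figure 2.

def f0_at(t, targets):
    """targets: list of (time, f0) tone targets, sorted by time.
    Returns the linearly interpolated f0 at time t."""
    for (t0, f0), (t1, f1) in zip(targets, targets[1:]):
        if t0 <= t <= t1:
            return f0 + (f1 - f0) * (t - t0) / (t1 - t0)
    raise ValueError('t outside the span of the targets')

# Phrasal H at 0.2 s falling to a boundary L% at 0.8 s vs. at 1.4 s:
short = [(0.2, 160.0), (0.8, 110.0)]    # steep fall over few syllables
long_ = [(0.2, 160.0), (1.4, 110.0)]    # shallower fall over more syllables
print(f0_at(0.5, short), f0_at(0.5, long_))   # 135.0 vs. 147.5 Hz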
2.2 The Intermediate Phrase

The partially specified tone patterns at the accentual phrase level are grouped together prosodically into units at the next higher level of phrasing, that of the intermediate phrase. An intermediate phrase consists of one or more accentual phrases (only rarely more than three). An intermediate phrase boundary is often marked by a pause or pseudo-pause (a pre-pausal "winding down" of production speeds unaccompanied by any actual momentary cessation of production). Also, the L% boundary tone for the last accentual phrase in an intermediate phrase is markedly lower than at a medial accentual phrase boundary.

Perhaps the most salient and systematic characteristic of the intermediate phrase, however, is that it is the domain of a process known as catathesis. Catathesis compresses the pitch range following an accent. This compression affects all tones up to the intermediate phrase boundary, but it does not propagate to the tones belonging to the following intermediate phrase. [5] If an intermediate phrase contains more than one accent, the multiple applications of catathesis cumulate, so that the pitch range can be extremely compressed by the end of the phrase.

5. Catathesis does affect the L% at the boundary between two intermediate phrases. This is why we consider the L% to be a property of the end of the preceding accentual phrase rather than of the beginning of the next accentual phrase, as shown above in representation (8).

An important finding of our experiments is that phrasing at this level is a fairly reliable indicator of focus. Even in syntactic structures where no phrase break is normally expected in neutral renditions, focus will introduce an intermediate phrase boundary right before the focused word or phrase. For example, in one of our experiments, subjects consistently introduced an intermediate phrase boundary between the words in an adjective-noun sequence when the discourse context gave the noun a contrastive emphasis. Often this striking use of phrasing was accompanied by local expansion of the pitch range on the focused item, affecting the f0 values of its phrasal H, accent tones, and boundary L%. In a sizeable number of utterances, however, the change in phrasing was the only consequence of focus.

We suspect that this relationship between phrasing and focus reveals something about the prominence structure internal to the intermediate phrase. In English, the last accented item in a phrase is generally agreed to be the strongest one. If, in Japanese, the strongest item in a phrase is instead in first position, one strategy for marking intonational prominence would be to structure the phrasing of the utterance so as to place the focused item at the beginning of an intermediate phrase. In English, focused items are sometimes set off by phrase boundaries in this way, but this use of phrasing is not nearly as characteristic as the manipulation of local pitch range and of syllable duration and amplitude to put a stronger rhythmic "beat" on the lexically stressed syllable. We believe that this contrast between English and Japanese is related to a difference in prosodic structure. The focused item in Japanese cannot be made more prominent by manipulating the rhythmic prominence of the stressed syllable, because Japanese does not have stress in the sense that English does.

2.3 The Utterance

Our third level of phrasing is the utterance. The phonological mark of an utterance is that it has an initial L% boundary tone.
It is also the type of phrase which can be ended with a question rise, a pattern which we account for by the insertion of a H% boundary following the L% ending the last accentual phrase. In our experiments, the utterance also seemed to be the domain for two phonetic processes affecting the pitch range. One is declination, which gradually lowers the pitch range as a function of distance from the beginning of the utterance. Unlike catathesis, it operates without regard to what tones are present. The other is final lowering, which further lowers the pitch range in anticipation of the end of the utterance. Questions exhibit declination but not final lowering. There is some reason to suppose that they are subject to final raising, which expands the pitch range at the end of the utterance. In particular, the H% boundary tone ending a question is considerably higher than H tones elsewhere in the sentence.

Final lowering is seen in English as well as in Japanese, and was originally supposed to define a comparable utterance level there. More recently, Hirschberg and Pierrehumbert (1986) have proposed that final lowering is not a prosodic property specific to a particular phonological phrase level in English, but rather is a more direct phonetic expression of discourse structure. We now suspect that final lowering in Japanese is similar, and in Beckman and Pierrehumbert (forthcoming), we suggest that declination also is such a paralinguistic discourse phenomenon. In the current implementation of the intonation synthesizer we treat final lowering and declination as utterance-level properties. On the other hand, we do make the amount of lowering in each utterance a user-controllable variable, so that it should not be difficult to test these more recent suggestions.

2.4 Other Miscellaneous Effects

In addition to the various phrase-specific f0 features discussed so far, there are certain other qualitative differences among tones. For example, our experiments showed that the H tone of the lexical accent is generally higher than the phrasal H of the accentual phrase. We account for this difference by giving the accent H intrinsically more tonal prominence. That is, we automatically assign it a higher target value within the local pitch range. Another important effect is that when the initial syllable in the following accentual phrase is lexically long or accented, the preceding boundary L% is weak. That is, it undergoes a phonetic lenition that causes the tone to be realized in the f0 contour with only a very short duration and with a target f0 value that is relatively higher than it otherwise would be. (As in English, low tones are made more tonally prominent by lowering.) Finally, the tonal prominence of a boundary L% reflects the boundary strength; the L% boundary tone is more tonally prominent (lower) at an intermediate phrase than at a mere accentual phrase boundary, and still more prominent at an utterance boundary.

3. THE F0 SYNTHESIZER

The phrasal f0 features outlined thus far are generated automatically by our synthesis program from a user-provided script that identifies the locations of the appropriate phrase boundaries and lexically determined accents in the time pattern of speech segments for an utterance. Thus at the accentual phrase level, the synthesizer inserts the phrasal H and boundary L% at the appropriate places relative to the phrase ends, and assigns the H of the accent to the designated syllable along with the accent L at the appropriate time delay.
At the intermediate phrase level, the program triggers a compression of the pitch range at each accent, lowering the values of all subsequent tones until the end of the phrase. And at the utterance level, it sequentially lowers the f0 values of the tones to generate the rule-prescribed time courses of declination and final lowering. The techniques used to implement these effects are quite similar to those used in the English synthesizer developed earlier by Anderson, Pierrehumbert, and Liberman (1984), and are applied in the same order.

3.1 The Schematized f0 Contour

First, the input routines parse the user-provided script, filling in system defaults for unspecified values to produce a set of values for speaker variables and phrasal structures. Once the script has been interpreted, the next step is to construct a schematic version of the f0 contour in which tones appear as level stretches. The values that must be computed in constructing the schematic are the temporal location of each stretch and its duration and f0 value.

3.1.1 Timing -- The location and duration of each tone is determined by the time pattern of the speech segments, and by our theory of the rules which align tones with segments. For example, the stretch for a medial L% begins at the end of the last segment before the relevant phrasal boundary. The difference in timing between a weak L% and a strong L% (see Section 2.4) is accomplished by giving a weak L% only a point duration and a strong L% the "standard tone duration" (a speaker- and rate-specific value roughly the length of a short syllable). The beginning of the following phrasal H can then be located immediately after the end of the L%.

In the present version of the synthesizer, the "standard tone duration" is the only possible duration for a tone that is not a point. The user can specify its actual millisecond value in his script for the utterance, or he can include it in a file of user-defined defaults for the speaker, or, if the system-provided default is appropriate for the speaker and rate, he can leave the value unspecified. [6] The locations and types of the various phrase boundaries and the location of the accent, on the other hand, are specific to an utterance, and must be specified by the user in the utterance script.

6. These three options are available also for other underived variables such as the position relative to the end of the utterance where final lowering should begin.

3.1.2 Rules for the f0 Value -- The f0 value of each tone is determined by the interaction of relationships such as the following:

High versus Low: A low tone is lower than a high tone in the same local pitch range setting.
Intrinsic prominence of accents: The H in an accent is higher than the phrasal H tone.
Boundary tone weakening: The L% boundary tone is higher if the first syllable of the upcoming phrase is long or accented.
Boundary strength: The L% boundary tone is lower at an intermediate phrase boundary than at an accentual phrase boundary, and lower yet at an utterance boundary.

In the synthesizer, all of these qualitative differences have been made precise, with numerical values for the various relations estimated from the results of our experiments. Obviously, several rules interact to control the value for any single tone. For instance, a boundary tone might be raised because the following phrase begins with a long syllable, but lowered because it is at an intermediate phrase boundary.
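As an illustration of how such interacting rules can be made precise, the following Python sketch computes the prominence of a boundary L% from two of the rules above. The function itself is hypothetical, but the constants follow the example values given in the Figure 3 caption below (L%(a)=0.5, L%(i)=0.6, L%(u)=0.7, weakening factor 0.85).

BOUNDARY_STRENGTH = {'accentual': 0.5, 'intermediate': 0.6, 'utterance': 0.7}
WEAKENING_FACTOR = 0.85   # applied when the next phrase starts long or accented

def boundary_low_prominence(boundary_type, next_starts_long_or_accented):
    """Prominence of a boundary L% (a higher value means a lower f0 target)."""
    p = BOUNDARY_STRENGTH[boundary_type]
    if next_starts_long_or_accented:
        p *= WEAKENING_FACTOR    # lenition makes the L% less prominent (higher f0)
    return p

# The case worked in the Figure 3 caption: an accentual-phrase boundary
# before a phrase with a long initial syllable.
print(boundary_low_prominence('accentual', True))   # 0.425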
3.1.3 The Tone-Scaling Domain -- The tone-scaling domain within which these rules operate is a normalized transformed hertz domain, which reflects the overall choice of pitch range and the intonational prominence of each accentual phrase. The lower bound of the tone-scaling domain is defined by a reference line (r), which is set to the lowest value in the speaker's range. The upper bound of the overall pitch range is a high-tone line for the intermediate phrase (h), which is set to the highest possible H tone value in that phrase. The size of the overall pitch range is thus h-r. By raising h, this overall pitch range is expanded for "speaking up" (as it would be in natural speech if the speaker is excited or projecting his voice). Various uses of this tone-scaling domain are illustrated in Figure 3. For example, catathesis is realized as a proportional compression of the overall pitch range that reduces the value of h at each accent according to the formula:

(10)   h_new = c (h_old - r) + r        [c < 1]

Note that in this equation the proportional reduction of h is normalized to the overall pitch range, so that it can be expressed as a constant value c. The prominences of different accentual phrases relative to the strongest element in the intermediate phrase are also normalized to this overall pitch range, so as to be readily interpretable and easily specified by the user. A local tone-scaling domain is calculated for each accentual phrase on the basis of its relative prominence. (This can be thought of as setting a local accentual-phrase value for the high-tone line h_a, as illustrated in Figure 3.) The relations among tones described above are then similarly expressed as prominence values normalized to this local tone-scaling domain. In this way the relationships can be expressed as speaker-specific constants despite changes in overall pitch range and local focus, and interactions among them can be multiplicative within the tone-scaling domain.

Within the local tone-scaling domain, H tones are scaled upward and L tones are scaled downward. That is, prominence values for H tones increase from 0 to 1 as f0 goes up from r to h, whereas those for L tones decrease from 1 to 0, as indicated by the different prominence scales to the right of the transformed hertz domain in Figure 3. Our use of this transformed hertz domain follows broadly the conceptual structure for English tonal scaling developed in Liberman and Pierrehumbert (1984). Differences between the two models appear to reflect differences between Japanese and English. For example, many English L tones appear below the reference line whereas Japanese L tones are all realized above it, in the same overall region as H tones.

Of the various quantitative values used in tone scaling, those of the reference line, of the high-tone line, of the catathesis ratio constant, and of the other constants for the relations among tones are all speaker variables like the "standard tone duration" for timing. Therefore, they are implemented in the synthesizer as variables that can be specified in the utterance script or in a separately provided defaults file, and which revert to the system default value if left unspecified by the user. The prominence value of each accentual phrase, on the other hand, is specific to its particular degree of subordination to the head of its intermediate phrase, and must be specified in the utterance script.
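The following is a minimal Python sketch of equation (10) and of the mapping from normalized prominences into hertz. The function names are invented, but the numbers reproduce the worked example in the Figure 3 caption (r=95 Hz, h=170 Hz, catathesis constant 0.6, local high-tone line 131 Hz for a phrase with prominence 0.8).

def catathesis(h_old, r, c=0.6):
    """Compress the high-tone line after an accent: h_new = c(h_old - r) + r."""
    return c * (h_old - r) + r

def tone_to_hz(prominence, kind, r, h_local):
    """Map a normalized prominence into the local tone-scaling domain:
    H tones scale upward from r, L tones downward from h_local."""
    if kind == 'H':
        return r + prominence * (h_local - r)
    else:  # 'L' tones: prominence 1.0 is lowest (at r), 0.0 is at h_local
        return h_local - prominence * (h_local - r)

r, h = 95.0, 170.0
h = catathesis(h, r)                   # after the accent in ana'ta: 140.0 Hz
print(h, tone_to_hz(0.8, 'H', r, h))   # 140.0, and 131.0 for prominence 0.8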
Figure 3. Tone-scaling domain with f0 values computed for the first nine tones in the utterance mayumi-wa ANA'TA-ni aima'sita ka? ('Did Mayumi meet YOU?'). Braces at top show the accentual phrase and intermediate phrase grouping. The reference line is 95 Hz and the high-tone line is 170 until reduced by the catathesis at the accent in ana'ta. Values for the y-axis are hertz on the scale to the left, and H-tone and L-tone prominences (as scaled in the initial pitch range) on the scales to the right. Labeled arrows illustrate the application of representative tone-scaling rules. (1) Boundary strength at utterance-initial boundary: L%(u)=0.7. (2) Boundary strength at intermediate-phrase boundary: L%(i)=0.6. (3-4) Relationship between phrasal H and accent H: accent H=1.0, phrasal H=0.8. (5) Catathesis constant is 0.6 and reduces high-tone line to 140 Hz. (6) Boundary strength at accentual-phrase boundary with weak L% tone because of long initial syllable in aima'sita: L%(a)=0.5, weak L%=0.85; weak L%(a)=0.5*0.85=0.425. (7) Accentual phrase aima'sita is subordinated to the focused accentual phrase ana'ta-ni by P=0.8, which locally compresses the tone-scaling domain by making a reduced local high-tone line: h_a=131 Hz.

The prominence value of each accentual phrase, on the other hand, is specific to its particular degree of subordination to the head of its intermediate phrase, and must be specified in the utterance script.

3.2 The Finished f0 Contour

When the tones have been located in time and frequency, several adjustments are made to produce a finished natural intonation contour from the schematized f0 contour. First, the tones are connected by linear interpolation, as shown in Figure 4a. Declination now applies, as well as final lowering in declaratives (Figure 4b). The resulting contour is then smoothed by convolution with a square window of roughly syllable width.7 Step functions in f0 now appear more realistically as gradual rises (Figure 4c). Finally, a small amount of random jitter is added to prevent the occurrence of unnaturally flat sections and unnaturally smooth ramps, and the f0 value is set to zero during portions corresponding to voiceless segments (Figure 4d). In order to listen to the results, the computed f0 contour is then substituted for the natural contour in an LPC-coded version of the utterance, and the speech is resynthesized.

7. The rates of the declination and of the final lowering and the size of the smoothing window are speaker- and rate-specific variables like the reference line, and are treated in the same way in the synthesis program.

Figure 4. Adjustments for making a finished f0 contour from schematic tone level stretches for the utterance shown in Figure 3. (1) Linear interpolation fills in unspecified values between tones. (2) Declination applies, but not final lowering, because the utterance is a question ending in a H% boundary tone. (3) The contour is smoothed by convolution with a syllable-sized square window. (4) Jitter is added and f0 values excised during voiceless segments [t], [s], and [k]. (5) The f0 contour of the original utterance is shown for comparison with (4).
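The four finishing adjustments form a simple signal-processing pipeline. The sketch below is a hypothetical rendering of that pipeline, not the program's routines; the window width, jitter amplitude, and function names are assumed for illustration, and declination and final lowering are left as a placeholder comment.

    import numpy as np

    def finish_contour(times, tone_f0, t_grid, voiced, window_ms,
                       frame_ms=10.0, jitter_hz=1.0, seed=0):
        """Turn schematic tone levels into a finished f0 contour (sketch).
        times/tone_f0: tone locations (s) and target values (Hz);
        t_grid: output frame times; voiced: boolean mask per frame."""
        # (a) linear interpolation between the scheduled tone targets
        f0 = np.interp(t_grid, times, tone_f0)
        # (b) declination and final lowering would apply here as
        #     time-dependent rescalings of the pitch range (their rates
        #     are speaker variables; omitted in this sketch)
        # (c) smoothing by convolution with a square window of
        #     roughly syllable width
        n = max(1, int(window_ms / frame_ms))
        f0 = np.convolve(f0, np.ones(n) / n, mode='same')
        # (d) add a little jitter, then zero out f0 in voiceless frames
        rng = np.random.default_rng(seed)
        f0 = f0 + rng.normal(0.0, jitter_hz, size=f0.shape)
        return np.where(voiced, f0, 0.0)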
CONCLUSION

The model of Japanese intonation implemented in the synthesis program accounts for all of the characteristics of Japanese intonational structure that we have been able to document in our experiments. Some future modifications to the model will probably be necessary as we learn more about how the highest level of phrasing behaves in long connected passages. For example, as noted above, we suspect on the basis of recent work on English (Hirschberg and Pierrehumbert, 1986) that some of the characteristics that we have identified with the utterance in the present model are actually reflections of discourse structure rather than features specific to a well-defined type of unit within the hierarchy of prosodic phrases. Constructing the f0 synthesizer has been useful in confirming our phonological and phonetic model of Japanese intonation. We believe that the synthesizer will also be useful in generating controlled materials for investigating the use of intonational prominence and the role of phrasing in parsing speech.

ACKNOWLEDGEMENTS

Ken Church, Julia Hirschberg, and Mitch Marcus gave useful comments on earlier drafts of this paper.

APPENDIX: GLOSSARY

catathesis. A sudden compression of pitch range that is triggered by a particular tonal configuration, and that lowers all tones following the trigger within some phrasal unit. In Japanese, catathesis is triggered by every accent, and in English, by every bitonal pitch accent.

declination. A gradual lowering of the pitch range that is effected as some function of time from the beginning of an utterance, without regard to the tonal structure.

final lowering. A gradual lowering of the pitch range starting at some distance from the end of the utterance.

fundamental frequency. The reciprocal of the period in a periodic signal, and the main physical correlate of pitch. Fundamental frequency is abbreviated f0 and is measured in periods per second (unit hertz). In speech, f0 corresponds to the frequency of vibration of the vocal cords during voiced segments.

H. A high tone.

high-tone line. In Japanese tone-scaling, the upper bound of the pitch range. Its f0 value corresponds to that of a hypothetical highest possible H tone in that range.

intonational phrase. A prosodic unit delimited phonologically by some sort of intonational feature such as a boundary tone.

L. A low tone.

LPC coding. A specification of the spectral characteristics of a signal in terms of sets of linear predictor coefficients at fixed intervals. An nth-order analysis of the signal is obtained by a least squares estimation of successive samples within an analysis frame from the linear combination of the last n samples. The set of predictor coefficients for each analysis frame can then be used as a filter for an input pulse train to synthesize a new signal with the same spectral pattern and an arbitrarily different f0 pattern.

pitch accent. A tonal configuration that is associated to a designated syllable in an utterance, and that marks the syllable (or the word containing the syllable) as accented or intonationally prominent. In Japanese, accent consists of a pitch fall from H tone to L at a lexically designated syllable in a word. In English, an accent is any one of six tonal patterns (H*, L*, H*+L, L*+H, H+L*, L+H*) that can be associated to a lexically designated syllable.

pitch range. The spread of fundamental frequency between the "floor" of a speaker's voice and the highest f0 appropriate to the occasion. Linguistic factors such as prominence or intonational focus (see Section 1.2) can locally affect pitch range, but it is determined overall by paralinguistic factors such as degree of animation and projection; the overall pitch range is raised or expanded when the speaker "speaks up" to project his voice, or when he is excited.

prosody.
The rhythm and melody of speech as specified phonologically in the representation of its phrasal organization and intonational structure, and as realized phonetically in duration and loudness and pitch patterns.

reference line. In Japanese tone-scaling, the bottom of the pitch range, corresponding to the lowest possible f0 value for a tone in a speaker's pitch range.

standard Japanese. The speech of educated Tokyo speakers, as prescribed by the Japanese Broadcasting Corporation.

stress. A local non-tonal prominence on a lexically designated syllable in an English word, which is realized phonetically in the rhythmic pattern of relative lengths and loudnesses, and also by certain segmental patterns such as vowel and consonant lenition.

tone. The basic phonological element representing distinctive events in the melody -- i.e., the melodic counterpart of a phonemic segment in the text string. We believe that these melodic segments are target pitch level specifications such as "high" and "low" rather than specifications of pitch change such as "rise" and "fall". (See Pierrehumbert and Beckman (forthcoming) for detailed arguments on this point.) In both English and Japanese, there are two tone types -- H and L -- and the type of each tone in an utterance, and its temporal location and f0 value, reflect the prosodic phrasing and intonational focus structure of the utterance.

REFERENCES

Anderson, Mark D., Janet B. Pierrehumbert, and Mark Y. Liberman. 1984. "Synthesis by Rule of English Intonation Patterns." Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing.

Beckman, Mary E. 1986. Towards Phonetic Criteria for a Typology of Lexical Accent. Netherlands Phonetic Archives No. 7, Foris Publications.

Beckman, Mary E., and Janet B. Pierrehumbert. forthcoming. "Intonational Structure in Japanese and English." Phonology Yearbook, Vol. 3.

Chomsky, N. 1971. "Deep structure, surface structure, and semantic interpretation." In D.D. Steinberg and L.A. Jakobovits, eds., Semantics: An Interdisciplinary Reader in Philosophy, Linguistics, and Psychology. Cambridge University Press, Cambridge, 183-216.

Grosz, B., and C. L. Sidner. 1985. "The Structures of Discourse Structure." Report 6097, BBN Laboratories, and Technical Note 369, AI Center, SRI International. To appear in Computational Linguistics.

Hirschberg, Julia, and Janet Pierrehumbert. 1986. "The Intonational Structuring of Discourse." This volume.

Liberman, Mark, and Janet Pierrehumbert. 1984. "Intonational Invariance under Changes in Pitch Range and Length." In M. Aronoff and R.T. Oehrle, eds., Language Sound Structure. MIT Press.

Marcus, M., D. Hindle, and M. Fleck. 1983. "D-Theory: Talking about Talking about Trees." Proceedings of the 21st Annual Meeting of the Association for Computational Linguistics, 129-136.

Marcus, M., and D. Hindle. 1985. "A computational account of extra-categorial elements in Japanese." Paper distributed at the SDF Japanese Syntax Workshop, UCSD, San Diego, March 1985.

Pierrehumbert, Janet B. 1980. The Phonology and Phonetics of English Intonation. MIT dissertation.

Pierrehumbert, Janet B. 1981. "Synthesizing Intonation." Journal of the Acoustical Society of America, 70: 985-995.

Pierrehumbert, Janet B., and Mary E. Beckman. forthcoming. "Japanese Tone Structure." Paper submitted to Linguistic Inquiry.
FORUM ON CONNECTIONISM

Questions about Connectionist Models of Natural Language

Mark Liberman
AT&T Bell Laboratories
600 Mountain Avenue
Murray Hill, NJ 07974

MODERATOR STATEMENT

My role as interlocutor for this ACL Forum on Connectionism is to promote discussion by asking questions and making provocative comments. I will begin by asking some questions that I will attempt to answer myself, in order to define some terms. I will then pose some questions for the panel and the audience to discuss, if they are interested, and I will make a few critical comments on the abstracts submitted by Waltz and Sejnowski, intended to provoke responses from them.

I. What is a "connectionist" model?

The basic metaphor involves a finite set of nodes interconnected by a finite set of directed arcs. Each node transmits on its output arcs some function of what it receives on its input arcs; these transfer functions are usually described parametrically, for instance in terms of a linear combination of the inputs composed with some nonlinear threshold-like function; the transfer function may involve a random variable. A subset of the nodes (or arcs) are designated as inputs and/or outputs, whose values are supplied or used by the "environment."

"Time" is generally quantized and treated in an idealized way, as if all connections involved a transmission delay exactly equal to the time quantum; this is presumably done for convenience and tractability, since neural systems are not like this. The nodes' transfer function may contain some sort of memory, e.g. an "activation level." The state of the network at time step t determines its state at time step t+1 (at least probabilistically, if random variables are involved); the network calculates its response to a change in its input by executing a sequence of time-steps sufficient to permit information to propagate through the required number of nodes, and to permit the system to attain (at least approximately) a fixed point, that maps back into itself or into a state sufficiently close.

Thus the system as a whole is usually defined so that it will settle into a static configuration for a static input pattern (models whose dynamics exhibit limit cycles or chaotic sequences are easy to devise, but I am not aware that they have been used). Connectionist models (at least those with static fixed points) define a relation on their set of input/output node values. Without further constraints on the number of hidden nodes, the nodes' transfer function, etc., the defined relation can obviously be anything at all. In fact, the circuits of a conventional digital computer can obviously be described in terms that make them "connectionist" in the very general sense given above. The most interesting connectionist models, such as the so-called "neural nets" of Hopfield and Tank, or the "Boltzmann machine," are defined in much more specific ways.

II. How can we categorize and compare the many different types of such models that have been proposed?

The situation is reminiscent of automata theory, where the basic metaphor of finite control, read/write head(s), input and output tape(s) has many different variations. The general theory of connectionist machines seems to be at a relatively early stage, however. Some particular classes of machines have been investigated in detail, but at the level of generality that seems appropriate for this panel, a general mathematical characterization does not exist.
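Before turning to distinctions among such models, the generic machine just described is concrete enough to sketch. The fragment below is an invented toy illustration -- the weights, transfer function, and synchronous update scheme correspond to no particular published model -- showing the idealized time-stepped iteration to a fixed point:

    import numpy as np

    def settle(W, bias, x_in, input_idx, steps=50):
        """Synchronously update a small network until a fixed point (sketch).
        W[i, j] is the weight on the arc from node j to node i."""
        a = np.zeros(len(bias))
        for _ in range(steps):
            a[input_idx] = x_in                          # clamp the input nodes
            nxt = 1.0 / (1.0 + np.exp(-(W @ a + bias)))  # nonlinear transfer
            nxt[input_idx] = x_in
            if np.allclose(nxt, a, atol=1e-6):           # settled into a static state
                break
            a = nxt
        return a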
Some crude distinctions seem worth making:

Some models "learn" while others have to be programmed in every detail. This is a gradient distinction, however, since the "learning" models require an appropriate network architecture combined with an appropriate description and presentation of the training material.

Some models represent category-like information diffusely, through ensembles of cooperating nodes and arcs, while others follow the principle of "one concept, one node."

III. Why are (some) connectionist models interesting?

The term "interesting" is obviously a subjective one. The list that follows expresses my own point of view.

1. Connectionist models are vaguely reminiscent of neurological systems. The analogy is extremely loose, at best; neuronal circuits are themselves apparently quite diverse, but they all share properties that are quite different from the connectionist models that are generally discussed. Still, it may be that there are some deep connections in terms of abstract information-processing methods.

2. Connectionist information processing is generally parallel and cooperative, with all calculations completed in a small number of time steps. For certain kinds of algorithms, network size scales gracefully with problem size, with at worst small time penalties.

3. In some cases, learning algorithms exist: training of the network over appropriate input/output patterns causes the network to remember the patterns and/or to "summarize" them according to statistical measures that depend on the network structure and the training method. The trained network "generalizes" to new cases; it generalizes appropriately if the new cases fit the design implicit in the network structure, the training method, and the training data. The same mechanisms also give the system some capacity to complete or correct patterns that are incomplete or partly errorful.

4. Some models (especially those that learn and that represent patterns diffusely) blur distinctions among rule, memory, and analogy. There need be no formal or qualitative distinction between a generalization and an exception, or between an exception and a subregularity, or between a literal memory and the output of a calculation. For some cognitive systems (including a number relevant to natural language) this permits us to trade the possibly harmful consequences of giving up on finding deeper generalizations for the immense relief of not looking for perfectly regular rules that aren't there.

5. Some aspects of human psychology can be nicely modeled in connectionist terms -- e.g., semantic priming, the role of spaced practice, frequency and recency effects, non-localized memory, restoration effects, etc.

6. Since connectionist-like networks can be used to build arbitrary filters and other signal-processing systems, it is possible in principle to build connectionist systems that treat signals and symbols in an integrated way. This is a tricky point -- an ordinary general-purpose computer reduces a digital filter and a theorem-prover to calculations in the same underlying instruction set, so the putative integration must be at a higher level of the model.

IV. What do connectionist models have to tell us about the structure of infinite sets of strings?
So far, well-defined connectionist models all deal with relations over a finite set of elements; at least, no one seems to have shown how to apply such models systematically to the infinite sets of arbitrarily-long symbol-sequences that form the subject matter of classical automata theory. Connectionist models can deal with sequences of symbols in at least two ways: the first is to connect the symbol sequence to an ordered set of nodes, and the second is to have the network change state in an appropriate way as successive symbols are presented.

In the first mode, can we do anything that adds to our understanding of the algorithms involved? For instance, it seems straightforward to implement a parallel version of standard context-free parsing algorithms, by laying out a 2D matrix of cells (corresponding to the set of substrings) for each of the nonterminal symbols, imposing connectivity along the rows and up the columns for calculating immediate domination relations, and so on. Can such an architecture be persuaded to learn a grammar from examples? It is limited to sentences of fixed maximum length -- is this enough to make learning possible? Under what circumstances can the resulting "trained" network be extended to longer inputs without retraining? Are there more interesting spatial-layout parsing models?

Many connectionist models are "finite impulse response" machines; that is, the consequences of an input pattern "die out" after the pattern is removed, and the network's propensity to respond to further patterns is left unchanged. If this characteristic is removed, and the network is made to calculate by changing state in response to a sequence of inputs, we can of course imitate classical automata in a connectionist framework. For instance, a push-down store can be built out of connectionist piece parts. Can a connectionist approach to processing of sequentially presented information do something more interesting than this? For instance, can the potentially very complex dynamics of such networks be exploited in a useful way?

V. Comments on Sejnowski

In evaluating Sejnowski's very interesting demonstration of letter-to-sound learning, it is worth keeping a few facts in mind. First, the success percentages reported are by letter, not by word (according to a personal communication from Sejnowski). Since the average word length was presumably about 7.4 (the average length of the 20000 commonest words in the Brown corpus), the success rate by word of the generalization from the 1000-word set to the 20000-word set must have been approximately 0.8^7.4, or about 19%. With the "additional training" (presumably training on the same set it was then tested on), the figure of 92% translates to 0.92^7.4, or about 54% correct by word.

Second, the training did not just present words and their pronunciations, but rather presented words and pronunciations with the correspondences between letters and phonemes indicated in advance. Thus the network does not have to parse and/or interrelate the two symbol sequences, but only keep track of the conditional probability of various possible translations of a given letter, given the surrounding letter sequences. My guess is that a probabilistic n-gram-based transducer, trained in exactly the same way (except that it would only need to see each example once), would outperform Sejnowski's network.
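For concreteness, the per-word estimates above are easy to check numerically (the 7.4-letter average word length is the figure assumed in the text):

    import math

    avg_len = 7.4                      # assumed average word length, as above
    print(math.pow(0.80, avg_len))     # ~0.19: per-word rate at 80% per letter
    print(math.pow(0.92, avg_len))     # ~0.54: per-word rate at 92% per letter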
Thus the interesting thing about Sejnowski's work is not, I think, the level of performance (which is not competitive with conventional approaches) but some perhaps lifelike aspects of its mode of learning, types of mistakes, etc. The best conventional letter-to-sound systems rely on a large morph lexicon (Hunnicutt's "DECOMP" from MITalk) or systematic back-formation and other analogical processes operating on a large lexicon of full words (Coker's "nounce" in the current Bell Labs text-to-speech system). Coker's system gives 100% coverage of the dictionary, in principle; more interestingly, it gives better than 99% (by word) coverage of random text, despite the fact that only about 80% of the words are direct hits. In other words, it is quite successful at guessing the pronunciation of words that it doesn't "know" by analogy to those that it does. To take an especially trivial, but very useful, example, it is quite good at decomposing unknown compound words into pairs of known words, with possible regular prefixes and suffixes.

Thus I have a question for Sejnowski: what would be involved in training a connectionist network to perform at the level of Coker's system? This is a case that should be well adapted to the connectionist approach -- after all, we are dealing with a relation over a finite set, training material is easily available, and Coker's success proves that the method of generalizing by analogy to a large knowledge base works well. Given this situation, is the poor performance of Sejnowski's network due only to its small size? Or was it set up in a way that prevents it from learning some relevant morphographemic generalizations?

VI. Comments on Waltz

Waltz is very enthusiastic about the connectionist future. I agree that the possibilities are exciting. However, I think that it is important not to depreciate the future by overselling the present. In particular, Waltz's statement that Sejnowski's NETtalk "learned the pronunciation rules of English from examples" is a bit of a stretcher -- I would prefer something like "summarized lists of contextual letter-to-phoneme correspondences, and generalized from them to pronounce about 20% of new words correctly, with many of its mistakes being psychologically plausible ones."

Waltz comments that connectionist models "promise to make the integration of syntactic, semantic, pragmatic and memory models simpler and more transparent." The four-way categorization of syntax, semantics, pragmatics, and memory strikes me as an odd way of dividing the world up; but I agree with what I take to be Waltz's main point. A little later he observes that "connectionist learning models... have demonstrated surprising power in learning concepts from example..." I'm not sure how surprising the accomplishments to date have been, but I agree that the possibilities are very exciting. What are the prospects for putting the "integrated processing" opportunities together with the "learning" opportunities?

If we restrict our attention to text input rather than speech input, then the most interesting issues in natural language processing, in my opinion, have to do with systems that could infer at least the lexical aspects of linguistic form and meaning from examples, not just for a toy example or two, but in a way that would converge on a plausible result for a major fraction of a language. Here, few of the basic questions seem to have answers.
In fact, from what I have seen of the literature in this area, many of the questions remain unposed. Here are a few of the questions that come to mind in relation to such a project. What would such a system have to learn? What kind of inputs would it need to learn it, given what sort of initial expectations, represented how? How much can be learned without knowledge of non-linguistic aspects of meaning? How much of such knowledge can be learned from essentially linguistic experience? Are current connectionist learning algorithms adequate in principle? How big would the network have to be? Is a non-toy version of such a system computationally tractable today, assuming it would work in principle? If only toy versions are tractable, can anything be proved about how the system would scale?
FORUM ON CONNECTIONISM

Language Learning in Massively-Parallel Networks

Terrence J. Sejnowski
Biophysics Department
Johns Hopkins University
Baltimore, MD 21218

PANELIST STATEMENT

Massively-parallel connectionist networks have traditionally been applied to constraint-satisfaction in early visual processing (Ballard, Hinton & Sejnowski, 1983), but are now being applied to problems ranging from the Traveling Salesman Problem to language acquisition (Rumelhart & McClelland, 1986). In these networks, knowledge is represented by the distributed pattern of activity in a large number of relatively simple neuron-like processing units, and computation is performed in parallel by the use of connections between the units.

A network model can be "programmed" by specifying the strengths of the connections, or weights, on all the links between the processing units. In vision, it is sometimes possible to design networks from a task analysis of the problem, aided by the homogeneity of the domain. For example, Sejnowski & Hinton (1986) designed a network that can separate figure from ground for shapes with incomplete bounding contours. Constructing a network is much more difficult in an inhomogeneous domain like natural language. This problem has been partially overcome by the discovery of powerful learning algorithms that allow the strengths of connection in a network to be shaped by experience; that is, a good set of weights can be found to solve a problem given only examples of typical inputs and the desired outputs (Sejnowski, Kienker & Hinton, 1986; Rumelhart, Hinton & Williams, 1986).

Network learning will be demonstrated for the problem of converting unrestricted English text to phonemes. NETtalk is a network of 309 processing units connected by 18,629 weights (Sejnowski & Rosenberg, 1986). It was trained on the 1,000 most common words in English taken from the Brown corpus and achieved 98% accuracy. The same network was then tested for generalization on a 20,000 word dictionary: without further training it was 80% accurate and reached 92% with additional training. The network mastered different letter-to-sound correspondence rules in varying lengths of time; for example, the "hard c rule", c -> /k/, was learned much faster than the "soft c rule", c -> /s/.

NETtalk demonstrably learns the regular patterns of English pronunciation and also copes with the problem of irregularity in the corpus. Irregular words are learned not by creating a look-up table of exceptions, as is common in commercial text-to-speech systems such as DECtalk, but by pattern recognition. As a consequence, exceptional words are incorporated into the network as easily as words with a regular pronunciation. NETtalk is being used as a research tool to study phonology; it can also be used as a model for studying acquired dyslexia and recovery from brain damage; several interesting phenomena in human learning and memory such as the power law for practice and the spacing effect are inherent properties of the distributed form of knowledge representation used by NETtalk (Rosenberg & Sejnowski, 1986).

NETtalk has no access to syntactic or semantic information and cannot, for example, disambiguate the two pronunciations of "read". Grammatical analysis requires longer range interactions at the level of word representations.
However, it may be possible to train larger and more sophisticated networks on problems in these domains and incorporate them into a system of networks that form a highly modularized and distributed language analyzer. At present there is no way to assess the computational complexity of these tasks for network models; the experience with NETtalk suggests that conventional measures of complexity derived from rule-based models of language are not accurate indicators.

REFERENCES

Ballard, D. H., Hinton, G. E., & Sejnowski, T. J. 1983. Parallel visual computation, Nature 306: 21-26.

Rosenberg, C. R. & Sejnowski, T. J. 1986. The effects of distributed vs massed practice on NETtalk, a massively-parallel network that learns to read aloud, (submitted for publication).

Rumelhart, D. E., Hinton, G. E. & Williams, R. J. 1986. In: Parallel Distributed Processing: Explorations in the Microstructure of Cognition. Edited by Rumelhart, D. E. & McClelland, J. L. (Cambridge: MIT Press.)

Rumelhart, D. E. & McClelland, J. L. (Eds.) 1986. Parallel Distributed Processing: Explorations in the Microstructure of Cognition. (Cambridge: MIT Press.)

Sejnowski, T. J., Kienker, P. K. & Hinton, G. E. (in press) Learning symmetry groups with hidden units: Beyond the perceptron, Physica D.

Sejnowski, T. J. & Hinton, G. E. 1986. Separating figure from ground with a Boltzmann Machine, In: Vision, Brain & Cooperative Computation, Edited by M. A. Arbib & A. R. Hanson (Cambridge: MIT Press).

Sejnowski, T. J. & Rosenberg, C. R. 1986. NETtalk: A parallel network that learns to read aloud, Johns Hopkins University Department of Electrical Engineering and Computer Science Technical Report 86/01.
FORUM ON CONNECTIONISM

Connectionist Models for Natural Language Processing

David L. Waltz
Thinking Machines Corporation
245 First Street
Cambridge, MA 02142
and
Program in Linguistics and Cognitive Science
Brandeis University
Brown 125
Waltham, MA 02254

PANELIST STATEMENT

After an almost twenty year lull, there has been a dramatic upsurge of interest in massively parallel models for computation, descendants of perceptron and pandemonium models, now dubbed 'connectionist models.' Much of the connectionist research has focused on models for natural language processing. There have been three main reasons for this increase in interest:

1. Scientific adequacy of the models
2. The availability of fine-grained parallel hardware to run the models
3. The demonstration of powerful connectionist learning models.

The scientific adequacy of models based on a small number of coarse-grained primitives (e.g. conceptual dependency), popular in AI during the 70's, has been called into question and substantially replaced by a current emphasis in much of computational linguistics on lexicalist models (i.e., ones which use words for representing concepts or meanings). However, few people can doubt that words are too coarse, that they have structure and properties and features. Connectionist models offer very fine granularity; they can capture such detail in a manner that still allows for tractable computation. Such models also promise to make the integration of syntactic, semantic, pragmatic, and memory models simpler and more transparent.

Fine-grained hardware, such as the Connection Machine, can allow models with millions of active elements, full vocabularies, and rapid throughput, as well as powerful near-term connectionist applications based on the use of associative memory and hardware support for interprocessor communication.

Meanwhile, connectionist learning models, such as the Boltzmann Machine and its descendant, the backward error propagation model, have demonstrated surprising power in learning concepts from example; as for instance in Sejnowski's NETtalk, which learned the pronunciation rules for English from examples. The future promises yet more surprising results as the concepts in even more radical models, such as Minsky's Society of Minds model, are digested and as new, even more powerful hardware becomes available.
DONNELLAN'S DISTINCTION AND A COMPUTATIONAL MODEL OF REFERENCE

Amichai Kronfeld
Artificial Intelligence Center, SRI International
333 Ravenswood Avenue, Menlo Park, CA 94025
kronfeld@sri-warbucks
and
Center for the Study of Language and Information
Stanford University, Stanford, CA 94305

ABSTRACT

In this paper, I describe how Donnellan's distinction between referential and attributive uses of definite descriptions should be represented in a computational model of reference. After briefly discussing the significance of Donnellan's distinction, I reinterpret it as being three-tiered, relating to object representation, referring intentions, and choice of referring expression. I then present a cognitive model of referring, the components of which correspond to this analysis, and discuss the interaction that takes place among those components. Finally, the implementation of this model, now in progress, is described.

INTRODUCTION

It is widely acknowledged that Donnellan's distinction [7] between referential and attributive uses of definite descriptions must be taken into account in any theory of reference. There is not yet agreement, however, as to where the distinction fits in a theoretical model of definite noun phrases. For Cohen [4], the intention that the hearer identify a referent constitutes a crucial difference between the referential and the attributive. Barwise and Perry [3], on the other hand, treat their value-loaded/value-free distinction as the central feature of the referential versus the attributive. However, as pointed out by Grosz et al. [9], this analysis ignores an essential aspect of Donnellan's distinction, namely, the speaker's ability, when using a description referentially, to refer to an object that is independent of the semantic denotation.

The problem of determining the correct interpretation of Donnellan's distinction is of considerable importance. First, Donnellan's distinction seems to violate the principle that reference to physical objects is achieved by virtue of the descriptive content of referring expressions. This principle can be found practically everywhere -- for example, in Frege's sense and reference, Russell's theory of descriptions, and Searle's speech acts. In the referential use of definite descriptions, however, reference seems to be established independently of descriptive content. If I say "The man over there with a glass of white wine is...," I may be successful in my act of referring -- regardless of whether the person over there is a man or a woman, the glass is full of wine or grape juice, the color of the beverage is white or red, and so on. This, if accepted, has far-reaching consequences for the meaning of referring expressions, for the logical structure of propositions, and for the theory of propositional attitudes.

Second, the referential/attributive distinction forces us to reconsider the division between semantics and pragmatics. It seems that a speaker's intentions in using a referring expression do make a semantic difference. If I say "Smith's murderer is insane," meaning that whoever murdered Smith is insane (the attributive case), what I say is true if and only if the one and only murderer is insane. If, on the other hand, my intention is to use the definite description referentially (referring to, say, Tom, who is accused of being the culprit), what I say is true if and only if Tom is indeed insane -- whether he is the murderer or not.
Unless we understand the interaction between conventional meaning and a speaker's intentions in such cases, we cannot hope to construct an adequate model of referring and language use in general.

Finally, Donnellan's distinction brings to the fore the role of identification in the speech act of referring. Both Strawson and Searle ([17,16]) attempted to analyze referring in terms of identification and identifying descriptions. But Donnellan has pointed to what seems to be a clear distinction between cases in which identification is required (referential use) and those in which it is not (attributive use). This calls for a new analysis of the speech act of referring, one that does not rely on identification as a central concept.1

In this paper, I present a general framework for treating Donnellan's distinction. In particular, I contend the following:

1. The apparent simplicity of the referential/attributive distinction masks three aspects of the problem of reference. In a sense, it is not one distinction but three: the first has to do with representations of objects, the second -- with referring intentions, the third -- with the choice of referring expressions.

2. These three distinctions are independent of one another, and should be handled separately. Each is relevant to a different component of a plan-based model of reference: the database, the planner, and the utterance generator, respectively.

3. Although the three distinctions are mutually independent, they of course interact with one another. The notion of a conversationally relevant description provides a basis for explaining how the interaction operates.

1. These comments, naturally, only touch the surface. For an extensive discussion of the significance of the referential/attributive distinction see my thesis [14]. For a discussion of the role of identification in referring, see the paper coauthored by Appelt and me on this topic [2].

CRITERIA

How is the referential to be distinguished from the attributive? Two criteria are usually offered:

1. Even though, when used attributively, the description must denote the intended referent, in the referential use this is not necessary.

2. In the referential use, the speaker has a particular object in mind, whereas in the attributive he does not.

These criteria have been taken to be equivalent: any use of a definite description that is referential according to one criterion should also be classified as referential according to the other (and similarly for the attributive use). However, the equivalence of the two criteria is really an illusion: some uses of definite descriptions are referential according to one criterion, but attributive according to the other. For example, let us suppose that John, a police investigator, finds Smith's murdered body, and that there are clear fingerprints on the murder weapon. Now consider John's utterance: "The man whose fingerprints these are, whoever he is, is insane." Note that John intended to speak of Smith's murderer, and he may very well have been successful in conveying his intended referent, whether or not the fingerprints indeed belonged to the murderer. Hence, according to the first criterion, the description, "The man whose fingerprints these are," was used referentially. On the other hand, John did not have any particular person in mind.
Hence, according to the second criterion, the description must have been used attributively.

Many, including Donnellan, regard the second criterion as the more significant one. But even this criterion is given two conflicting interpretations. On the one hand, "having a particular object in mind" is taken as an epistemic concept: this view holds that one can have a particular object in mind while referring only if one knows who or what the referent is. On the other hand, the criterion also receives what I call the modal interpretation. According to this reading, the referential use of a definite description is simply tantamount to employing the description as a rigid designator. Obviously, the two interpretations are not equivalent. As Kaplan demonstrates [11], one can use a description as a rigid designator without having any idea who the referent is.

Thus, there are three aspects of Donnellan's distinction that should be carefully separated. These aspects can be represented in terms of three dichotomies:

* Having knowledge of an object versus not having such knowledge (the epistemic distinction).

* Using a description as a rigid designator versus using it as a nonrigid one (the modal distinction).

* Using a definite description "the F" to refer to whoever or whatever the F may be, versus using "the F" to refer to an object x, whether or not x is indeed the F (the speech act distinction).

THREE COMPONENTS

The epistemic, modal, and speech act distinctions correspond to three components that a plan-based model of reference must possess.2 Any such model must contain the following:

1. A database that includes representations of objects
2. A planner that constructs strategies for carrying out referring intentions
3. An utterance generator that produces referring expressions

Let us call these the database, the planner, and the utterance generator, respectively. The next three sections describe a cognitive model of referring that incorporates these components.

Object Representations

Objects are represented to agents by terms. These terms are grouped into individuating sets. An individuating set for an agent S is a maximal set of terms, all believed by S to be denoting the same object. For example, for John, the police investigator, the set {Smith's murderer, the man whose fingerprints these are} is an individuating set of Smith's murderer. The incredibly complex cluster of internal representations under which, for instance, John's mother would be represented to him is also an individuating set, although it would be impractical to enumerate all the terms in this set.

An individuating set is grounded if it contains either a perceptual term or a term that is the value of a function whose argument is a perceptual term. For example, a set containing the description "your father" is grounded, since it contains a term that is the result of applying the function FATHER-OF to a perceptual term representing you.

It should be emphasized that an individuating set is the result of the speaker's beliefs, not a mirror of what is actually the case. A speaker may possess two distinct individuating sets that, unbeknownst to him, determine the same object (e.g., Oedipus's representations of his mother and his wife). On the other hand, a speaker may possess an individuating set containing two or more terms that actually denote different objects. Moreover, the object that an agent believes to be denoted by the terms of some individuating set may not exist in the actual world.
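As a rough illustration of how terms and individuating sets might be represented, consider the sketch below. The class names and the way perceptual terms are marked are invented for this example and are not the actual data structures of the implementation described later:

    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class Term:
        text: str                    # e.g. "Smith's murderer"
        perceptual: bool = False     # a direct perceptual term
        arg: Optional["Term"] = None # argument, if this term applies a
                                     # function (e.g. FATHER-OF) to a term

    @dataclass
    class IndividuatingSet:
        """Terms the agent believes to denote one and the same object."""
        terms: list = field(default_factory=list)

        def grounded(self) -> bool:
            # grounded if some term is perceptual, or is a function of one
            return any(t.perceptual or (t.arg is not None and t.arg.perceptual)
                       for t in self.terms)

    you = Term("you", perceptual=True)
    father = Term("your father", arg=you)
    iset = IndividuatingSet([father, Term("the man who called")])
    print(iset.grounded())  # True: "your father" is FATHER-OF a perceptual term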
Whether or not an agent can have knowledge of the referent, or know who or what the referent is (the epistemic distinction), depends on the nature of the relevant individuating set. In a computational model, we can place a number of restrictions on individuating sets to reflect various epistemological intuitions. For example, we may require that, for an agent to be able to manipulate an object, the relevant individuating set must contain a perceptual term, or that, for an agent to know who a certain person is (relative to purpose P), the relevant individuating set must include a privileged term determined by P, or that, for an agent to have knowledge of an object, the relevant individuating set must be grounded, and so on. Since individuating sets are part of the database, this is where the epistemic distinction belongs.

2. For a plan-based model of referring, definite noun phrases, and speech acts in general, see articles by Appelt, Cohen, Cohen and Levesque, and Cohen and Perrault ([1,4,5,6]).

Table 1: Donnellan's distinction, its interpretations, and the corresponding computational components.

    DISTINCTION   INTERPRETATION                   COMPONENT
    Epistemic     Type of individuating set        Database
    Modal         Type of referring intentions     Planner
    Speech act    Choice of definite noun phrase   Utterance generator

Referring Intentions

A speaker may have two distinct types of referring intentions. First, he may select a particular term from the relevant individuating set, and intend this term to be recognized by the hearer. Second, the speaker may intend to refer to the object determined by an individuating set, without intending any particular term from the set to be part of the proposition he wants to express. Consider, for example, the following two statements:

1 The author of Othello wrote the best play about jealousy.
2 Shakespeare was born in Stratford-upon-Avon.

In making both statements, a speaker would normally be referring to Shakespeare. But note the difference in referring intentions between the two: in the first statement, the speaker selects a particular aspect of Shakespeare, namely, the fact that he is the author of Othello, and intends the hearer to think of Shakespeare in terms of this aspect. In the second statement, the speaker does not select any particular aspect of Shakespeare from the relevant individuating set. Indeed, he may not care at all how the hearer makes the connection between the name "Shakespeare" and the referent.

The two types of referring intentions yield two distinct types of propositions. When the speaker does not intend any particular aspect of the referent to be recognized by the hearer, the proposition expressed in this way is singular, that is, it does not contain any individual concept of the referent. Consequently, the referring expression chosen by the speaker (be it a proper name, a demonstrative, or even a definite description) is used as a rigid designator, which means that it picks out the same individual in all possible worlds where the referent exists. On the other hand, if a particular aspect of the referent is meant to be recognized by the hearer, then the individual concept corresponding to that aspect is part of the proposition expressed and should therefore be taken into account in evaluating the truth value of what is said. Thus, it is the speaker's referring intentions that determine whether or not he will use a definite description as a rigid designator (the modal distinction).
Since referring intentions are represented in the planner, this is where the modal distinction belongs.

Note that the two types of referring intentions can be described as intentions to place constraints on the way the hearer will be thinking of the referent. In Appelt and Kronfeld [2], this is generalized to other referring intentions -- for example, the intention that the hearer identify the referent.

Referring Expressions

Once the speaker decides what his referring intentions are, he must choose an appropriate referring expression. Usually, if a particular aspect of the referent is important, a suitable definite description is employed; otherwise a proper name or a demonstrative may be more useful. However, such a neat correlation between types of referring expressions and referring intentions may not happen in practice. In any case, as we shall see in the next section, the speaker's choice of a referring expression constitutes an implicit decision as to whether the denotation of the referring expression must coincide with the intended referent (the speech act distinction). The choice of referring expression is naturally made within the utterance generator, where the speech act distinction is represented.

By way of summary, Table 1 shows how Donnellan's distinction, in its reinterpreted form, is related to a plan-based model of reference.

RELEVANT DESCRIPTIONS

Kripke and Searle [12,15] explain the referential use as a case in which speaker's reference is distinct from semantic reference. This leaves an important question unanswered: why must speaker's reference and semantic reference coincide in the attributive use?3

Sometimes two definite descriptions are equally useful for identifying the intended referent, yet cannot be substituted for each other in a speech act. The description employed, besides being useful for identification, has to be relevant in some other respect. Consider the utterance: "New York needs more policemen." Instead of "New York," one might have used "The largest city in the U.S." or "The Big Apple," but "The city hosting the 1986 ACL conference needs more policemen" won't do, even though this description might be as useful in identifying New York as the others. The latter statement simply conveys an unwarranted implication.

As a generalization, we may say that there are two senses in which a definite description might be regarded as relevant. First, it has to be relevant for the purpose of letting the hearer know what the speaker is talking about.4 A description that is relevant in this sense may be called functionally relevant. Second, as the example above indicates, a description might exhibit a type of relevance that is not merely a referring tool.
A description that is relevant in this noninstrumental sense might be called conversationally relevant.

3. As redefined by the speech act distinction.
4. Whether the hearer is also expected to identify the referent is a separate issue.

Every use of a definite description for the purpose of reference has to be functionally relevant. But not every such use has to be conversationally relevant. If indicating the referent is the only intended purpose, any other functionally relevant description will do just as well. In other cases, the description is supposed to do more than just point out the intended referent to the hearer. Consider the following examples:

3 This happy man must have been drinking champagne.
4 The man who murdered Smith so brutally has to be insane.
5 The winner of this race will get $10,000.

In these examples, the speaker implicates (in Grice's sense) something that is not part of what he says. In (3), it is implicated that the man's happiness is due to his drinking. In (4), it is implicated that the main motivation for believing the murderer to be insane is that he committed such a brutal homicide. The implicature in (5) is that the only reason for giving the winner $10,000 is his victory in a particular race. In all these cases, what is implicated is some relationship between a specific characteristic of the referent mentioned in the description and whatever is said about that referent. In such cases, it does matter what description is chosen, since the relevance is both functional and conversational. No other description, even if it identifies equally well, can be as successful in conveying the intended implicature.

The conversationally relevant description may not be mentioned explicitly, but rather inferred indirectly from the context. In the fingerprint example, the speaker uses the description, The man whose fingerprints these are, but the conversationally relevant description is nevertheless Smith's murderer.

Thus, there are three general ways in which a speaker may employ a referring definite description (a schematic rendering follows the list):

1. If the discourse requires no conversationally relevant description, any functionally relevant one will do. This covers all standard examples of the referential use, in which the sole function of the definite description is to indicate an object to the hearer.

2. If a conversationally relevant description is needed, the speaker may do either of the following:

(a) Use the description explicitly. This is what is done in standard examples of the attributive use.

(b) Use a different, functionally relevant description. The speaker can do so, however, only if the context indicates the aspect of the referent that corresponds to the conversationally relevant description. This explains the ambiguity of the fingerprint example. As the definite description uttered is only functionally relevant, its use appears to be referential. Yet, unlike the referential case, a conversationally relevant description is implied.

In sum, when the description used is conversationally relevant, the speaker intends that the specific way he chose to do his referring should be taken into account in interpreting the speech act as a whole. Consequently, if the description fails, so does the entire speech act. On the other hand, if the description is only functionally relevant, the context may still supply enough information to identify the intended referent.
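The three-way choice above can be phrased as a small decision procedure. The following sketch is purely schematic; its arguments stand in for analyses the speaker would have to perform, and none of it corresponds to implemented code:

    def choose_description(candidates, conv_relevant=None,
                           context_implies_it=False):
        """Pick a referring description per the three cases above (sketch).
        candidates: functionally relevant descriptions; conv_relevant: the
        conversationally relevant one, if the speech act needs it."""
        if conv_relevant is None:
            return candidates[0]   # case 1: any functionally relevant one
        if not context_implies_it:
            return conv_relevant   # case 2a: use it explicitly
        return candidates[0]       # case 2b: context carries the implicature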
From a pragmatic standpoint, the description "the shortest spy" is very likely to be conversationally relevant in real discourse, simply because all we know about the referent is that he is the shortest spy. Thus, whatever we may have to say about that person is very likely to be related to the few facts contained in the description. Even if it is clear that a conversationally relevza~t description is needed for the speech act to succeed, constraints on choices of referring expressions may prevent the speaker from using this description. One such constraint results from the need to identify the referent for the hearer. If the conversationally relevant description is not suited for identification, a conflict arises. For example, in "John believes Smith's murderer to be insane," the speaker may be trying simultaneously to represent the content of John's belief and to identify for the hearer whom the belief is about. Sometimes it is impossible to accomplish both goals with one and the same description. IMPLEMENTATION This paper is part of an extensive analysis of the referen- tial/attributive distinction, which I use in the construction of a general model of reference [13]. My ultimate research objective is to provide s computational version of the reference model, then to incorporate it into a general plan-based account of def- inite and indefinite noun phrases. An experimental program that implements individuating ~ets has already been written. Called BERTRAND, this program interprets a small subset of English statements, and stores the information in its database, which it then uses to answer questions. Individuating sets are represented by an equivalence relation that holds among refer- ring expressions: two referring expressions, R1 and R2, belong to the same individuating set if, according to the information interpreted so far, RI and R 2 denote the same object. In con- strueting individuating sets, BERTRAND uses a combination of logical and pragmatic strategies. The logical strategy ex- ploits the fact that the relation "denote the same object" is symmetric, transitive, and closed under substitution. Thus, it 189 can be concluded that two referring expressions, RI and Rz, denote the same object (belong to the same individuating set) in one of the following ways: 5 1. Directly, when the statement "Rt is Rz ~ (or "R2 is RI ~) has been asserted. 2. Recursively using transitivity -- i.e., when, for a referring expression Rs, it can be shown that Rl and Rs, as well as Rs and Rz, belong to the same individuating set. 3. Recursively using substitution -- i.e., when Rl and Rz are identical, except that Rl contains a referring expression subRl exactly where Rz contains a referring expression subRz, and 8ubRl and subR2 belong to the same individ- uating set. Note that, in the logical strategy, it is tacitly assumed that the relation of denoting the same object always holds between two identical tokens of referring expressions. This is obviously too strong an assumption for any realistic discourse: for ex- ample, two utterances of "The man" may very well denote two different people. On the other hand, the logical strategy fails to capture cases in which it is implied (although never actu- ally asserted) that two distinct referring expressions denote the same thing. For example, "I met Marvin Maxwell yesterday. The man is utterly insane! ~ To compensate for these weaknesses, BERTRAND uses a strategy based on Grosz's notion of ffocus stack" [8,10]. 
In conceptual terms (and without going into details), it works as follows: a stack of individuating sets, representing objects that are "in focus," is maintained throughout the "conversation." When a new referring expression is interpreted, it is trans- formed into an open sentence D(z) with a single free variable z. s An individuating set I is said to subsume an open sentence S if S can be derived from I. The first individuating set in the focus stack to subsume D(z) represents the object denoted by the new referring expression. This solves the aforementioned problems: two occurrences of the same referring expression are considered as denoting the same object only if both are subsumed by the same individuating set in the focus stack, and two distinct referring expressions may still be considered as denoting the same object even though the logical strategy failed to show this, provided that both are subsumed by the same individuating set. Once the concept of an individuating set has been imple- mented, referring intentions can be represented as intentions to activate appropriate subsets of individuating sets. For ex- ample, the intention to use a conversationally relevant descrip- tion can be represented as the plan to activate a subset of an individuating set that contains the term associated with the description. This is the topic of a current joint research effort with D. Appelt [2] to investigate the interaction that takes place between individuating sets and Appelt's four types of SWhat belongs to an individuating set, of course, is not a referring expression but the logical structure associated with it. For the sake of simplicity, however, I do not make this distinction here. 6For example, ~The man from the city by the bay ~ is transformed into Man(a:)&From(z, Xi) where Xi is an "internal symbol" associated with Clty(y)&By(y,Xi) , and )(j is associated with Bay(z). concept activation actions [1]. The next stage in the devel- opment of BERTRAND -- the implementation of referring intentions -- will be based on this research. In the final stage, individuating sets and referring intentions will be used to gen- erate actual referring expressions. ACKNOWLEDGMENTS This research was supported by the National Science Founda- tion under Grant DCR-8407238. I am very grateful to Doug Appelt and Barbara Grosz for detailed comments on earlier drafts, as well as to memhers of the Discourse, Intention and Action seminar at the Center for the Study of Language and Information for stimulating discussions of related issues. REFERENCES [1] Douglas E. Appelt. Some pragmatic issues in the planning of definite and indefinite noun phrases. In Proceedings of the £Srd Annual Meeting, Association for Computational Linguistics, 1985. [2] Douglas E. Appelt and Amichai Kronfeld. Toward a model of referring and referent identification. Forthcom- ing. Submitted to the AAAI convention, Philadelphia, August 1986. [3] Jon Barwise and John Perry. Situations and Attitudes. The Massachsetts Institute of Technology Press, Cam- bridge, Massachusetts, 1983. [4] Philip R. Cohen. Referring as requesting. In Proceedings of the Tenth International Conference on Computational Linguistics, pages 207-211, 1984. [5] Philip R. Cohen and Hector Levesque. Speech acts and the recognition of shared plans. In Proceedings of the Third Biennial Conference, Canadian Society for Com- putational Studies of Intelligence, 1980. [6] Philip R. Cohen and C. Raymond Perranlt. Elements of a plan-based theory of speech acts. 
ACKNOWLEDGMENTS

This research was supported by the National Science Foundation under Grant DCR-8407238. I am very grateful to Doug Appelt and Barbara Grosz for detailed comments on earlier drafts, as well as to members of the Discourse, Intention and Action seminar at the Center for the Study of Language and Information for stimulating discussions of related issues.

REFERENCES

[1] Douglas E. Appelt. Some pragmatic issues in the planning of definite and indefinite noun phrases. In Proceedings of the 23rd Annual Meeting, Association for Computational Linguistics, 1985.

[2] Douglas E. Appelt and Amichai Kronfeld. Toward a model of referring and referent identification. Forthcoming. Submitted to the AAAI convention, Philadelphia, August 1986.

[3] Jon Barwise and John Perry. Situations and Attitudes. The Massachusetts Institute of Technology Press, Cambridge, Massachusetts, 1983.

[4] Philip R. Cohen. Referring as requesting. In Proceedings of the Tenth International Conference on Computational Linguistics, pages 207-211, 1984.

[5] Philip R. Cohen and Hector Levesque. Speech acts and the recognition of shared plans. In Proceedings of the Third Biennial Conference, Canadian Society for Computational Studies of Intelligence, 1980.

[6] Philip R. Cohen and C. Raymond Perrault. Elements of a plan-based theory of speech acts. Cognitive Science, 3:177-212, 1979.

[7] Keith S. Donnellan. Reference and definite descriptions. Philosophical Review, 75:281-304, 1966.

[8] Barbara J. Grosz. Focusing and description in natural language dialogues. In A. Joshi, I. Sag, and B. Webber, editors, Elements of Discourse Understanding, pages 85-105, Cambridge University Press, Cambridge, England, 1980.

[9] Barbara J. Grosz, A. Joshi, and S. Weinstein. Providing a unified account of definite noun phrases in discourse. In Proceedings of the Twenty-first Annual Meeting, pages 44-50, Association for Computational Linguistics, 1983.

[10] Barbara J. Grosz and Candace L. Sidner. Discourse structure and the proper treatment of interruptions. In Proceedings of the Ninth International Joint Conference on Artificial Intelligence, pages 832-839, 1985.

[11] David Kaplan. Dthat. In Peter Cole, editor, Syntax and Semantics, Volume 9, Academic Press, New York, New York, 1978.

[12] Saul Kripke. Speaker's reference and semantic reference. In French et al., editors, Contemporary Perspectives in the Philosophy of Language, University of Minnesota Press, Minneapolis, Minnesota, 1977.

[13] Amichai Kronfeld. Reference and Denotation: The Descriptive Model. Technical Note 368, SRI International Artificial Intelligence Center, 1985.

[14] Amichai Kronfeld. The Referential Attributive Distinction and the Conceptual-Descriptive Theory of Reference. PhD thesis, University of California, Berkeley, 1981.

[15] John Searle. Referential and attributive. In Expression and Meaning: Studies in the Theory of Speech Acts, Cambridge University Press, Cambridge, England, 1979.

[16] John Searle. Speech Acts: An Essay in the Philosophy of Language. Cambridge University Press, Cambridge, England, 1969.

[17] Peter F. Strawson. On referring. In J. F. Rosenberg and C. Travis, editors, Readings in the Philosophy of Language, Prentice Hall, Englewood Cliffs, New Jersey, 1971.
Time and Tense in English

Mary P. Harper and Eugene Charniak
Brown University
Department of Computer Science
Box 1910
Providence, RI 02912

Abstract

Tense, temporal adverbs, and temporal connectives provide information about when events described in English sentences occur. To extract this temporal information from a sentence, it must be parsed into a semantic representation which captures the meaning of tense, temporal adverbs, and temporal connectives. Representations were developed for the basic tenses, some temporal adverbs, and some of the temporal connectives. Five criteria are proposed for judging these representations, and the representations are evaluated against them.

Introduction

English sentences contain many types of temporal information. Tense is used to inform the reader (listener) of when the event associated with the main verb occurs with respect to the time of utterance. That is, tense informs the reader that an event occurs before, after, or during the time of utterance. Temporal adverbs (such as tomorrow or now) add additional information about the events in a sentence. Temporal connectives tell the reader about the temporal relationship between the events in the main clause and the events in the subordinate clause. While there is other temporal information that can be found in sentences, the following will concentrate on these three.

To extract temporal information from a sentence, it must be parsed into a semantic representation which captures the meaning of tense, temporal adverbs, and temporal connectives. A temporal representation of tense, adverbs, and temporal connectives must:

1. provide a way to reject temporally incorrect sentences, such as *"I will run yesterday."
2. allow one to reason about the temporal relationship between events. For instance, the sentence "I had run when he arrived" implies that the run event occurs before the arrival, whereas in the sentence "I was running when he arrived," the arrival and run events overlap.
3. allow the exact time of an event to be unfixed until it is pinpointed based on contextual information or adverbial modification.
4. allow reference to points and intervals of time (e.g., precisely at 3 PM vs. for 5 hours).
5. allow parsing of temporal information in sentences to be simple and compositional.

These criteria were used to judge previous temporal representation research (Bruce (1972), Hornstein (1977, 1981), Yip (1985)). None fulfilled all five criteria. The criteria will also be used to judge the representations developed here.

This work has been supported in part by the National Science Foundation under grants IST 8416034 and IST 8515005, and by the Office of Naval Research under grant N00014-79-C-0529.

Tense

The representations for tense, adverbs, and temporal connectives developed here are based on McDermott's (1982) temporal logic. McDermott's "point-based" temporal logic was chosen because it is not unusual to talk about the beginning and end points of a period of time or an event. In fact, the semantics of tense developed here relate the endpoints of events in sentences. This representation of tense provides a flexibility not found in many other representations of tense (e.g., Hornstein, 1977, 1981). Flexibility is important since events can extend over tense boundaries (for instance, "In 3 minutes, the boy will have run for 24 hours.").
Any representation of events in time must capture the fact that some events do not always wholly occur in the past, present, or future with respect to the time of utterance. The tense rules are compositional and require the following relations: < (before), > (after), = (cotemporaneous), <= (before or cotemporaneous), and >= (after or cotemporaneous). It is assumed that events are "unit" events and have a beginning and an end point, where the beginning of an event is before or simultaneous to its end point. The endpoint of an event need not imply the achievement of the purpose with which the event was initiated (e.g., the existence of the end point of a winning event need not imply that the state of having won is achieved). To capture the meaning of simple as well as more complex tenses, we introduce the following events:

1. Utterance Event - This is simply the speaking event associated with a sentence.

2. Main Event - This is the event indicated by the main verb of the sentence. For instance, the run event in the following sentence is the main event: "I have been running to the store."

3. Perfect Event - This is the time interval referred to in sentences like "Bill had eaten at 3 PM," which describes an eat event in the "distant past." This sentence implies the existence of an event or time interval which occurs after the main event (eat) but before the utterance event.

4. Progressive Event - This is the time interval from which the main event extends into the past and into the future. The progressive event may have no correlation with a "real world" event, but its existence predicts certain phenomena in our model of temporal adverbs and connectives. It can be thought of as a place holder, or the minimal possible duration of a main event with progressive aspect.

The following five rules describe the semantics of tense both in English and in our representation. The verbs in a sentence are parsed left to right (assuming an ATN, which is the parser in which these tense rules were implemented). One of the following three rules is triggered by the tense of the first verb in the sentence. "Event" (in the first three rules) can be a main event, a perfect event, or a progressive event depending on the sentence.

1. Past rule: This rule implies that there exists some event that must end before the beginning of the utterance event.

    (< (end event) (begin utterance-event))

2. Present rule: This rule implies that there exists some event that is either cotemporaneous with the utterance event or can begin at or after the beginning of the utterance event. Which is asserted seems to depend on the aspect of the verb associated with the event. If the current verb is stative, then

    (and (= (begin event) (begin utterance-event))
         (= (end event) (end utterance-event)))

If the current verb is not a stative, then

    (>= (begin event) (begin utterance-event))

3. Future rule: This rule implies that there exists some event that must begin after the end of the utterance event.

    (> (begin event) (end utterance-event))

The following rules are required to interpret the more complicated perfect and progressive tenses.

4. Perfect rule: This rule is triggered by the word have followed by a past participle. The event in the rule can be a progressive or a main event.

    (<= (end event) (begin perfect-event))

5. Progressive rule: This rule is triggered by the word be followed by a progressive verb form. The event in the rule can only be a main event.
    (and (<= (begin main-event) (begin progressive-event))
         (>= (end main-event) (end progressive-event)))

These rules combine in a compositional way to define the more complicated tenses. For instance, the past perfect progressive tense combines the past rule with the perfect and progressive rules. Thus the sentence "Jack had been running" is represented as follows:

    (and (inst utterance6 utterance-event)
         (< (end have2) (begin utterance6))    ; past rule
         (inst have2 perfect-event)
         (<= (end be3) (begin have2))          ; perfect rule
         (inst be3 progressive-event)
         (inst run64 run)
         (<= (begin run64) (begin be3))        ; progressive rule
         (>= (end run64) (end be3))
         (inst run64 main-event)
         (name Jack16 Jack)
         (:= '(agent run64) Jack16))

A "temporal" picture can be drawn for this sentence (see Figure 1). Note that the picture is only one possible depiction of the actual meaning of this representation.

[Figure 1. "Jack had been running." -- a timeline, past | now | future: run64 surrounds be3, be3 ends by the start of have2, and have2 ends before utterance6.]

A parser uses the semantic rules of tense as follows. After checking the tense of the first verb, the parser checks to see if the verb is the word will. If it is, then move to the next verb and mark the event associated with this verb as a future event. Assert either the past, present, or future rule depending on the tense associated with the "event" of the current verb. Now check to see if the current verb is have followed by a past participle. If so, then assert the perfect rule relating the perfect event (the event associated with have) and the event associated with the verb to the right of have, and move to that verb. After checking for perfect tense, the parser looks for a form of the word be followed by the progressive form of a verb. This signals the progressive rule, which relates the progressive event with the main event.

The representation adopted has some support in the linguistic literature, and there are some similarities to the representations developed by Bruce (1972), Hornstein (1977, 1981), Reichenbach (1947), and Yip (1985), although there are many differences. One difference between this representation and previous representations of tense is how present tense is defined. All past theorists have considered present tense as indicating that the main event is cotemporaneous with the time of utterance. However, the aspect of the verb seems to affect the meaning of present tense. In present tense sentences, there exists a curious phenomenon which can best be understood by examining the following two sentences:

1. I leave at eight o'clock tomorrow.
2. *I have a dog tomorrow.

Aspect interacts with present tense, requiring a more complicated present rule in a theory of tense.
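Before turning to adverbials, the left-to-right walk just described can be made concrete as a small driver that composes the five rules into one constraint set. The sketch below is ours, not the authors' (their rules were implemented in an ATN); the event names perf, prog, and utt are invented stand-ins, and will is assumed to have already been consumed, leaving only the tense of the first contentful verb:

    def past(ev):   return [('<',  (ev, 'end'),   ('utt', 'begin'))]
    def future(ev): return [('>',  (ev, 'begin'), ('utt', 'end'))]

    def present(ev, stative=False):
        if stative:                      # cotemporaneous with the utterance
            return [('=', (ev, 'begin'), ('utt', 'begin')),
                    ('=', (ev, 'end'),   ('utt', 'end'))]
        return [('>=', (ev, 'begin'), ('utt', 'begin'))]

    def perfect(ev):        # 'have' + past participle: ev precedes the perfect event
        return [('<=', (ev, 'end'), ('perf', 'begin'))]

    def progressive(main):  # 'be' + -ing: main extends around the progressive event
        return [('<=', (main, 'begin'), ('prog', 'begin')),
                ('>=', (main, 'end'),   ('prog', 'end'))]

    def tense_constraints(tense, has_perfect, has_progressive,
                          main='main', stative=False):
        """Left-to-right composition of the five rules for one verb group.
        Returns (anchor event, endpoint constraints)."""
        anchor = 'perf' if has_perfect else ('prog' if has_progressive else main)
        rule = {'past': past, 'future': future}.get(
            tense, lambda ev: present(ev, stative))
        cs = rule(anchor)
        if has_perfect:
            cs += perfect('prog' if has_progressive else main)
        if has_progressive:
            cs += progressive(main)
        return anchor, cs

    # "Jack had been running": past tense + perfect + progressive
    anchor, cs = tense_constraints('past', True, True, main='run64')
    # anchor == 'perf' (the have event); cs holds, in order:
    #   ('<',  ('perf', 'end'),    ('utt', 'begin'))   -- past rule
    #   ('<=', ('prog', 'end'),    ('perf', 'begin'))  -- perfect rule
    #   ('<=', ('run64', 'begin'), ('prog', 'begin'))  -- progressive rule
    #   ('>=', ('run64', 'end'),   ('prog', 'end'))

The returned anchor (the event tied to the first verb) is the event that matters for the adverb-tense agreement discussed next.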
Adverbials

The representation of several types of temporal adverbs will be considered, as well as how the meaning of these adverbs combines with the meaning of the tense. As in the representation of tense, we require the following relations: <, >, <=, >=, and =. We will consider how to predict incorrect combinations of tense and adverbs based on the representations of tense and adverbs developed here.

As suggested by Hornstein (1977), we adopt the idea that which event is modified by an adverbial is an important issue (since we introduce multiple events in our definition of some of the basic tenses). The ambiguity concerning which event is modified can best be seen in the following example: "I had eaten at 3." This sentence has an utterance event, which cannot be directly modified by an adverb. It can be modified by context, and it can be modified when some event which is cotemporaneous to the utterance event is modified. The past perfect sentence introduces a perfect event and a main event (eat) in addition to the utterance event. If we assume that the main event is modified, then the time of "eating" must overlap 3 o'clock. If it modifies the perfect event, then by the time 3 o'clock came around the "eating" was complete. In general, we adopt the idea that which event is modified is ambiguous, and thus a disjunction of possibilities is asserted.

Since Hornstein (1977) and Yip (1985) examined the three adverbials tomorrow, yesterday, and now, we will concentrate on these three. Each of these adverbs shares the fact that they are defined with respect to the time of the utterance event (today is also included in this category of adverbs though not discussed here). The representations of now, tomorrow, and yesterday follow:

Now: Now is defined to be a time interval which is cotemporaneous with the utterance event. Thus, the representation of some specific now is:

    (and (inst now16 time-interval)
         (= (begin now16) (begin utterance2))
         (= (end now16) (end utterance2)))

Tomorrow: Tomorrow is also defined with respect to the time of utterance. Notice that the duration of tomorrow is precisely 24 hours (as indicated in the fourth conjunct).

    (and (inst tomorrow3 day)
         (> (begin tomorrow3) (end utterance2))
         (< (begin tomorrow3) (+ (end utterance2) (* 24 hour)))
         (= (- (end tomorrow3) (begin tomorrow3)) (* 24 hour)))

Yesterday: Yesterday is defined with respect to the time of utterance, and has a 24 hour duration.

    (and (inst yesterday3 day)
         (< (end yesterday3) (begin utterance2))
         (> (end yesterday3) (- (begin utterance2) (* 24 hour)))
         (= (- (end yesterday3) (begin yesterday3)) (* 24 hour)))

To satisfy criterion 1, this model should be able to predict temporal inconsistencies between temporal adverbs and tense. Any event in a sentence can be modified by an adverb if the event can potentially overlap the period of time associated with the adverb. Thus we introduce the overlap rule of adverb-tense agreement:

Overlap Rule: An event can be modified by a temporal adverb iff the time period associated with the adverb can overlap the time period associated with the event without some temporal contradiction. That is, if the following assertion does not contradict other temporal assertions associated with the sentence, then the events can overlap:

    (and (<= (begin event) (end adverb))
         (>= (end event) (begin adverb)))
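Concretely -- and only as an approximation, since the paper's adverbs denote constrained intervals rather than fixed ones, and a real system would test consistency with a temporal reasoner instead of with numbers -- the three adverbs and the Overlap Rule can be sketched as follows (all names ours):

    HOUR = 1
    DAY = 24 * HOUR
    UTT = (100.0, 100.0)        # the utterance event, idealized as a point

    def now():       return UTT                        # cotemporaneous
    def yesterday(): return (UTT[0] - DAY, UTT[0])     # 24 hours, ending by the utterance
    def tomorrow():  return (UTT[1], UTT[1] + DAY)     # 24 hours, starting after it

    def can_overlap(event, adverb):
        """Overlap Rule: modification is possible iff the intervals can
        intersect: begin(event) <= end(adverb) and end(event) >= begin(adverb)."""
        return event[0] <= adverb[1] and event[1] >= adverb[0]

    run = (90.0, 95.0)                       # a run event wholly in the past
    print(can_overlap(run, yesterday()))     # True:  "He ran yesterday"
    print(can_overlap(run, tomorrow()))      # False: *"He ran tomorrow"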
Because events are defined flexibly in this tense representation, some events can cross tense boundaries. For correct adverb-tense agreement, the events in the sentence must be "anchored" to the event associated with the first verb in the sentence, that is, the event that determines the tense of the sentence (note that will has no event associated with it). The need for this anchoring can best be shown with the following examples:

    *Now, he will have eaten. (excluding modal reading)
    *Yesterday, he will have eaten. (excluding modal reading)
    Tomorrow, he will have eaten.

The tense structure of each of these sentences (as given by our tense rules) introduces three events: an utterance event, a perfect event, and a main event. Notice that the only event that is necessarily in the future is the perfect event. The main event could overlap yesterday or now, as well as tomorrow. Thus it would seem that, given that the main event can be modified by yesterday or now, the first two sentences should be correct. However, except for possible modal readings, these sentences are not acceptable. We account for this with the following rule:

Anchoring rule: If the time period of the event associated with the first verb of a sentence can overlap the time period associated with an adverb, then the adverb can modify that event and can potentially modify the other events in the sentence (based on the overlap rule). The utterance event cannot be modified using the anchoring rule.

To show how these two rules (anchoring and overlap) are used, examine the sentence "He is running now."

Step 1: Get the basic representations of the adverbial and the tense.

    (and (inst utterance6 utterance-event)
         ; adverb representation
         (inst now5 time-interval)
         (= (begin now5) (begin utterance6))
         (= (end now5) (end utterance6))
         ; tense representation
         (inst be1 progressive-event)
         (= (begin be1) (begin utterance6))
         (= (end be1) (end utterance6))
         (inst run4 run)
         (inst run4 main-event)
         (<= (begin run4) (begin be1))
         (>= (end run4) (end be1)))

Step 2: Check to see if the anchor event can overlap the adverb. Assume that CHECK is a function that returns true if the overlap is possible. Since be1 and now5 occur at the same time, the result of the test is true.

    (CHECK (and (<= (begin be1) (end now5))
                (>= (end be1) (begin now5))))

Step 3: If the overlap check of the anchor returns true, then do overlap checks on the remaining events. For those that return true, assert a disjunction of ways that the adverb can modify the events. In this case assert:

    (or (and (<= (begin be1) (end now5))
             (>= (end be1) (begin now5)))
        (and (<= (begin run4) (end now5))
             (>= (end run4) (begin now5))))

An example of a sentence in which the anchor event and the adverb cannot overlap is *"He ran tomorrow." The run event cannot overlap tomorrow (because the run event ends in the past and tomorrow begins in the future), and the sentence is therefore reported as erroneous. See Table 1 for the adverb-tense predictions of our model. Modal readings are ignored in this paper.

Table 1. The Tense-Adverb Compatibility

                         Now               Yesterday           Tomorrow
    Past                 error             ok                  error
    Past Progressive     Prog. Rule only   ok or Prog. Rule*   Prog. Rule only
    Past Perfect         error             ok                  error
    Present              ok                error               ok or error
    Present Progressive  ok                error               Prog. Rule only
    Present Perfect      ok                error               error
    Future               error             error               ok
    Future Progressive   error             error               ok or Prog. Rule
    Future Perfect       error             error               ok

    * "Prog. Rule" refers to a modification of the Progressive Rule suggested by Hornstein (1977), which is ignored in this paper.

There are other adverbials which are interpreted relative to the time of utterance (for instance, this week, next week, and last year). It is not difficult to imagine how to represent these adverbials. There are also some adverbials which need not be defined relative to the time of utterance. These include all of the clock calendar adverbials, such as Sunday and midnight. For example, the representation of a specific Sunday is:

    (and (inst sunday3 day)
         (= (- (end sunday3) (begin sunday3)) (* 24 hour)))

Sunday3 cannot be placed in the past, present, or future. However, when Sunday is used in a sentence, we can determine whether we mean a past, present, or future Sunday. Durational adverbials can also be easily represented (somewhat like the definition of Sunday).
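Steps 1 through 3 amount to a small procedure: test the anchor first, then collect the disjunction of events the adverb might modify. Continuing the same simplification as above (intervals standing in for constraint sets; all names ours):

    def adverb_readings(anchor, events, adverb):
        """Anchoring rule plus overlap checks (Steps 1-3 above).
        events maps names to candidate (begin, end) intervals; returns the
        events the adverb may modify, or None if the sentence is rejected."""
        def overlaps(iv):
            return iv[0] <= adverb[1] and iv[1] >= adverb[0]
        if not overlaps(events[anchor]):       # anchoring rule fails
            return None
        # the disjunction ranges over every event that can overlap
        return [name for name, iv in events.items() if overlaps(iv)]

    # "He is running now": anchor be1 is cotemporaneous with the utterance,
    # and run4 may extend around it.
    events = {'be1': (100.0, 100.0), 'run4': (99.0, 101.0)}
    print(adverb_readings('be1', events, (100.0, 100.0)))   # ['be1', 'run4']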
There are other adverbials which, like clock calendar adverbials, are not interpreted with respect to the time of speech. One such temporal adverb is just. This adverb is distinguished from the word just meaning only. To see how it is used, examine the following sentences:

1. I just ate lunch.
2. I was just eating lunch.
3. I had just eaten lunch.
4. *I just eat.
5. I am just eating lunch.
6. I have just eaten lunch.
7. *I will just eat lunch.
8. I will be just eating lunch.
9. I will have just eaten lunch.

Notice that just cannot be used in the simple present or simple future tense. This adverb requires the existence of some event in the sentence that begins immediately after the start of the event modified by just. Sentences 5 and 8 require progressive events to represent their tense structure. This tense representation allows our model to predict the correctness of these two sentences. The definition of just follows:

Just: Just relates two events, where Ev1 can be the main event, the progressive event, or the perfect event, and Ev2 can be the utterance event, the perfect event, or the progressive event. Ev1 and Ev2 must not be separated by another event introduced by the sentence. δ is some small value which is determined by context.

    (< (begin Ev1) (begin Ev2))
    if (< (end Ev1) (begin Ev2))
    then assert (< (- (begin Ev2) (end Ev1)) δ)
    else assert (< (- (begin Ev2) (begin Ev1)) δ)

There are many other temporal adverbials that need to be represented, among them recently, afterwards, earlier, lately, already, and soon. Most of these relate two events, in much the same way as the temporal connectives which are our next topic.

Temporal Connectives

A few issues must be examined before we present our representation of temporal connectives. First it should be pointed out that temporal connectives are subordinators. Most subordinators do not restrict the tense of the subordinate clause given the tense of the main clause. The tense of the main clause does restrict the tense of the subordinate clause when the subordinator is a temporal connective. The following results are predicted by Hornstein (1977):

    John left when Harry
    1. *arrives.         4. arrived.          7. *will come.
    2. *is arriving.     5. was arriving.     8. *will be coming.
    3. *has arrived.     6. had come.         9. *will have arrived.

By studying the above example, one might suggest that the main clause and the subordinate clause must have the same tense (disregarding progressive and perfect aspects). This seems to be true for all past and present tenses. There are some restrictions on this statement, however, since the will/shall construction of future tense is not allowed in temporal subordinate clauses. As pointed out by Leech (1971):

"In dependent clauses introduced by conditional and temporal conjunctions if, unless, when, as soon as, as, etc., the future is denoted by the ordinary Present Tense instead of the construction with will/shall: I'll tell you if it hurts. When the spring comes, the swallows will return. Jeeves will announce the guests as they arrive." (p. 59)

If the will/shall construction is used in a subordinate clause introduced by a temporal connective, then the reading of the sentence is not a future but a modal reading. This fact was not noticed by Hornstein (1977, 1981) or Yip (1985). Hornstein allows both present tense and will/shall future tense to occur in temporal subordinate clauses. Yip only allows the will/shall future tense to occur in the subordinate clause.(1)

(1) Yip (1985) and Hornstein (1977) try to deal with this temporal connective phenomenon and adverb-tense agreement with a unified theory. Hornstein's theory accepts sentences of the form *"I have eaten tomorrow" so that the sentence "I will leave when he has eaten" is acceptable. Yip modifies Hornstein's theory to get rid of the yesterday-present perfect error, but the modification does not allow a future tense main clause to have a present tense subordinate clause.
Rather than include the syntactic needs of temporal connectives in our semantic representation, it seems wiser to include the requirement at a syntactic level. That is, the tense of the first verb of the main clause restricts the tense of the first verb in the temporal subordinate clause. If the tense of the first verb in the main clause of the sentence is past or present, then the first verb in the subordinate clause must have like tense. If the tense of the first verb in the main clause is future tense, then the tense of the first verb in the subordinate clause must be present tense (though it will be semantically interpreted as future tense).

Now, we must consider how to extract the temporal meaning of sentences of the form sentence-temporal connective-sentence. Each clause will be given a temporal representation as indicated in the tense representation section of this paper. Both clauses will have the same time of utterance, since an utterance event is created only for a sentence. The only subtlety is the requirement that present tense in a subordinate clause be interpreted using future semantics when the main clause has future tense. After each clause is represented, the semantics for the temporal connective must be invoked. Each temporal connective requires its own definition, as pointed out by Hornstein (1977). These definitions will determine the temporal relationship between the events in the main clause and the events in the subordinate clause. We will present the definitions for five temporal connectives: when, while, until, before, and after. Because these definitions can use the representation of tense associated with each clause in a sentence to interrelate the events between clauses, the strength of the tense representation is increased.

When: align the anchor events to determine the relationship between the events of the clauses. If the main events of both clauses are the anchor events, then the events may occur at exactly the same time, though not necessarily.

    (and (= (begin anchor-event(main-clause)) (begin anchor-event(subordinate-clause)))
         (= (end anchor-event(main-clause)) (end anchor-event(subordinate-clause))))

While: align the anchor and main events of the clauses. Check to see if the alignment of both is possible. If the check returns false, then reject the sentence.

    (and (= (begin anchor-event(main-clause)) (begin anchor-event(subordinate-clause)))
         (= (end anchor-event(main-clause)) (end anchor-event(subordinate-clause)))
         (= (begin main-event(main-clause)) (begin main-event(subordinate-clause)))
         (= (end main-event(main-clause)) (end main-event(subordinate-clause))))

Until: requires in most cases that the main event of the main clause end when the main event of the subordinate clause begins. If the tense representation of the subordinate clause has a perfect event and no progressive event, then the main event of the main clause must end when the main event of the subordinate clause ends.
    If the subordinate clause has a perfect but no progressive event:
        (= (end main-event(main-clause)) (end main-event(subordinate-clause)))
    Else:
        (= (end main-event(main-clause)) (begin main-event(subordinate-clause)))

Before: requires that the anchor event of the main clause end before the beginning of the main event of the subordinate clause.

    (< (end anchor-event(main-clause)) (begin main-event(subordinate-clause)))

After: requires in most cases that the main event of the main clause begin after the end of the anchor event of the subordinate clause. If the main clause has a progressive event, then the anchor event of the main clause begins after the end of the anchor event of the subordinate clause, and the main event of the subordinate clause ends before the end of the main event of the main clause.

    If the main clause has a progressive event:
        (and (< (end anchor-event(subordinate-clause)) (begin anchor-event(main-clause)))
             (< (end main-event(subordinate-clause)) (end main-event(main-clause))))
    Else:
        (< (end anchor-event(subordinate-clause)) (begin main-event(main-clause)))

Notice that before and after are not always inverses of one another. Consider the following two sentences:

1. I ate before he was running.
2. He was running after I ate.

If before and after were inverses, then sentences 1 and 2 would have equivalent meanings, which they do not. The definitions of before and after capture this asymmetry.

Two examples are presented to acquaint the reader with the representation of sentences joined by temporal connectives. The first is "Mary ate when Joe was eating."

I. Represent the clauses.

    (and (inst utterance3 utterance-event)
         ; "Mary ate"
         (inst eat22 eat)
         (inst eat22 main-event)
         (< (end eat22) (begin utterance3))
         (name Mary22 Mary)
         (:= '(agent eat22) Mary22)
         ; "Joe was eating"
         (< (end be4) (begin utterance3))
         (inst be4 progressive-event)
         (inst eat23 eat)
         (inst eat23 main-event)
         (<= (begin eat23) (begin be4))
         (>= (end eat23) (end be4))
         (name Joe12 Joe)
         (:= '(agent eat23) Joe12))

II. Do the semantics for when. Note that the anchor event for the main clause is eat22, and the anchor event for the subordinate clause is be4.

    (and (= (begin eat22) (begin be4))
         (= (end eat22) (end be4)))

[Figure 2. "Mary ate when Joe was eating." -- a timeline, past | now | future: eat22 aligned with be4, eat23 possibly extending around both, all before utterance3.]

This implies that eat23 can begin before and end after eat22, though they could be exactly coincident. This seems to be the desired interpretation of this sentence. This is not the meaning that Hornstein's model would give this sentence. Yip (1985) introduces progressive aspect rules to Hornstein's tense rules to get exactly this result.

The second example consists of an analysis of the sentence "Mary ate when he had eaten."

I. Represent the clauses.

    (and (inst utterance3 utterance-event)
         ; "Mary ate" representation
         (inst eat22 eat)
         (inst eat22 main-event)
         (< (end eat22) (begin utterance3))
         (name Mary22 Mary)
         (:= '(agent eat22) Mary22)
         ; "He had eaten" representation
         (< (end have3) (begin utterance3))
         (inst have3 perfect-event)
         (inst eat23 eat)
         (inst eat23 main-event)
         (<= (end eat23) (begin have3))
         (name Jack12 Jack)
         (:= '(agent eat23) Jack12))

II. Do the semantics for when. Note that the anchor event for the main clause is eat22, and the anchor event for the subordinate clause is have3.

    (and (= (begin eat22) (begin have3))
         (= (end eat22) (end have3)))

This sentence can be depicted as shown in Figure 3.
[Figure 3. "Mary ate when Jack had eaten." -- a timeline, past | now | future: eat23 ends by the start of the aligned eat22/have3 interval, all before utterance3.]

Thus, it can be seen that eat23 must end by the beginning of eat22. This seems to be the correct interpretation of this sentence, and was exactly the interpretation that Hornstein's when rule makes. These two examples show that the when rule predicts very different relationships between events depending on the tenses in the clauses.
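Before concluding, note that each connective definition reduces to emitting a few endpoint equations between the clause representations already built by the tense rules. A sketch (ours, and purely illustrative, using the same constraint-tuple convention as the earlier sketches):

    def when_rule(main_anchor, sub_anchor):
        """'when': align the two anchor events."""
        return [('=', (main_anchor, 'begin'), (sub_anchor, 'begin')),
                ('=', (main_anchor, 'end'),   (sub_anchor, 'end'))]

    def before_rule(main_anchor, sub_main):
        """'before': the main clause's anchor ends before the
        subordinate clause's main event begins."""
        return [('<', (main_anchor, 'end'), (sub_main, 'begin'))]

    # First example: "Mary ate when Joe was eating" -- anchors eat22 and be4.
    print(when_rule('eat22', 'be4'))
    # Second example: "Mary ate when he had eaten" -- anchors eat22 and have3,
    # so eat23 (already constrained to end by have3) must end by eat22.
    print(when_rule('eat22', 'have3'))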
Conclusion

This paper describes a preliminary study of the temporal phenomena found in English sentences. Many issues have been ignored for simplicity. For instance, the issue of habitual readings of verbs was not examined. The meanings of verbs with temporal aspects (such as plan) were also not considered. In addition, we did not consider how to relate (in time) events from different sentences. The only events from different sentences that can be related are the utterance events. If two sentences occur in sequence, one can conclude only that the utterance event of the first ends before the utterance event of the second. The model developed here can, however, temporally order events within a sentence.

Five criteria were suggested at the beginning of the paper for the representation of temporal information found in an English sentence. These criteria guided the development of our model. All criteria were met, except the compositional parse criterion in a few cases. There seem to be unavoidable special cases which cannot be captured in compositional tense, adverb, and temporal connective rules. For instance, the meanings of some adverbs require tense information to determine their correct representations (e.g., just).

References

Allen, James. Maintaining Knowledge About Temporal Intervals. CACM, 1983, 26, 832-843.

Bruce, Bertram C. A Model for Temporal References and Its Application in a Question Answering Program. Artificial Intelligence, 1972, 3, 1-25.

Charniak, E., Gavin, M., and Hendler, J. The Frail/Nasl Reference Manual. Brown University Technical Report CS-83-06, 1983.

Charniak, E. and McDermott, D. Introduction to Artificial Intelligence. Reading, MA: Addison-Wesley Publishing Company, 1985.

Hornstein, Norbert. Towards a Theory of Tense. Linguistic Inquiry, 1977, 8, 521-557.

Hornstein, Norbert. The Study of Meaning in Natural Language. In N. Hornstein & D. Lightfoot (Eds.), Explanation in Linguistics. New York: Longman, 1981.

Leech, Geoffrey N. Meaning and the English Verb. London: Longman, 1971.

McDermott, Drew. A Temporal Logic For Reasoning About Processes And Plans. Cognitive Science, 1982, 6, 101-155.

Reichenbach, Hans. Elements of Symbolic Logic. New York: MacMillan, 1947.

Yip, Kenneth M. Tense, Aspect and the Cognitive Representation of Time. IJCAI Proceedings, 1985, 806-814.
The detection and representation of ambiguities of intension and description

Brenda Fawcett and Graeme Hirst
Department of Computer Science
University of Toronto
Toronto, Ontario
CANADA M5S 1A4

Abstract

Ambiguities related to intension and their consequent inference failures are a diverse group, both syntactically and semantically. One particular kind of ambiguity that has received little attention so far is whether it is the speaker or the third party to whom a description in an opaque third-party attitude report should be attributed. The different readings lead to different inferences in a system modeling the beliefs of external agents. We propose that a unified approach to the representation of the alternative readings of intension-related ambiguities can be based on the notion of a descriptor that is evaluated with respect to intensionality, the beliefs of agents, and a time of application. We describe such a representation, built on a standard modal logic, and show how it may be used in conjunction with a knowledge base of background assumptions to license restricted substitution of equals in opaque contexts.

1. Introduction

Certain problems of ambiguity and inference failure in opaque contexts are well known, opaque contexts being those in which an expression can denote its intension or underlying concept rather than any particular extension or instance. For example, (1) admits two readings:

(1) Nadia is advertising for a penguin with whom she could have a long-term meaningful relationship.

On the transparent (or extensional or de re) reading, there is some particular penguin that Nadia is after:

(2) Nadia is advertising for a penguin with whom she could have a long-term meaningful relationship, whom she met at a singles bar last week and fell madly in love with, but lost the phone number of.

On the opaque (or intensional or de dicto) reading, Nadia wants any entity that meets her criteria:

(3) Nadia is advertising for any penguin with whom she could have a long-term meaningful relationship.

On this reading, the rule of existential generalization fails; that is, we cannot infer from (3), as we could from (2), that:

(4) There exists a penguin with whom Nadia could have a long-term meaningful relationship.

Another rule of inference that fails in opaque contexts is substitution of equals; (5) and (6) do not permit the conclusion (7):

(5) Nadia believes that the number of penguins campaigning for Greenpeace is twenty-two.
(6) The number of penguins campaigning for Greenpeace is forty-eight.
(7) ⇏ Therefore, Nadia believes that forty-eight is twenty-two.

Although these facts are familiar, little research has been done on how a practical NLU system can detect and resolve intensional ambiguities (which can occur in many constructions besides the 'standard' examples; see Fodor 1980, Fawcett 1985), and control its inference accordingly. The same is true of certain other complications of opaque contexts that are of special relevance to systems that use explicit representations of knowledge and belief. In particular, the interaction between intensional ambiguities and the beliefs of agents has not been studied. The present work is a first step towards rectifying this.

2. Attributing descriptions

Previous linguistic systems that dealt with opaque contexts, such as that of Montague (1973), have taken a God's-eye view, in the sense that the speaker and listener are assumed to have perfect knowledge, as are, in certain ways, the people of whom they speak.
No account is taken of the limits of the knowledge or beliefs of the agents involved.

To see that beliefs are a complicating factor, consider the following sentence, usually considered to be two ways ambiguous -- transparent or opaque:

(8) Nadia wants a dog like Ross's.

These ambiguities, however, cross with an ambiguity as to which agent the description a dog like Ross's is to be attributed: to the speaker, or to Nadia (the agent of the verb of the sentence). This gives a total of four possible readings. To see the four cases, consider the following situations, all of which can be summarized by (8):

(9) Transparent reading, agent's description:
Nadia sees a dog in the pet store window. "I'd like that dog," she says, "It's just like Ross's." The speaker of (8), who need not be familiar with Ross's dog, reports this.

(10) Transparent reading, speaker's description:
Nadia sees an animal in the pet store window. "I'd like that," she says. Nadia is not aware of it, but the animal is a dog just like the one Ross owns. The speaker of (8), however, knows Ross's dog (and believes that the listener also does).

(11) Opaque reading, agent's description:
Nadia feels that her life will be incomplete until she obtains a dog. "And the dog that would be perfect for me," she says, "Is one just like the one that Ross has." The speaker of (8), not necessarily familiar with Ross's dog, reports this.

(12) Opaque reading, speaker's description:
Nadia feels that her life will be incomplete until she obtains a pet. "And the pet that would be perfect for me," she says, "Is a big white shaggy dog, with hair over its eyes." Nadia is not aware of it, but Ross owns a dog just like the one she desires. The speaker of (8), however, knows Ross's dog (and believes that the listener also does).

The agent's-description readings permit the inference that Nadia believes that she (either intensionally or extensionally) wants a dog like Ross's; the other readings do not. Making the distinction is thus crucial for any system that reasons about the beliefs of other agents, such systems being an area of much current concern in artificial intelligence (e.g., Levesque 1983, Fagin and Halpern 1985).

Another complicating factor is the time at which a description is to be applied. The above readings assumed that this was the time of the utterance. The intensional readings, however, could be referring to the dog that Ross will get or (not included in the examples below) once had:

(13) Opaque reading, agent's description, future application:
Nadia has heard that Ross will buy a dog. Wanting one herself, and trusting Ross's taste in canines, she resolves to buy whatever kind he buys.

(14) Opaque reading, speaker's description, future application:
Nadia finds English sheepdogs attractive, but none are available. She therefore intends to purchase some other suitably sized dog and spend her weekend gluing long shaggy hair onto it. Nadia is not aware of it, but Ross owns a dog just like the one she wants to end up with. The speaker, knowing Ross's dog, can describe Nadia's desire as that of having an object that will at some future time be describable as a dog like Ross's.

The description in an intensional reading may also be used to refer to different entities at different times.

(15) Opaque reading, agent's description, repeated application:
Ross buys a new type of dog every year or so. Desperately wanting to keep up with canine fashion, Nadia declares her intent to copy him.
Whatever dog Ross has at any given time, Nadia wants to have the same kind.

We have not been able to find an example in which repeated application of the speaker's description gives a natural reading. Extensional readings always seem to refer to the present time.(2) Thus, there are at least seven readings for Nadia wants a dog like Ross's.(3)

3. Other intensional ambiguities and inference failures

There are other kinds of intension-related inference failures besides those mentioned in the previous sections. For example, some opaque contexts forbid inferences from postmodifier deletion, while others permit it. Both readings of (16) entail the less specific (17) (which preserves the ambiguity of (16)):

(16) Nadia is advertising for a penguin that she hasn't already met.
(17) Nadia is advertising for a penguin.

However, the same cannot be done with (18):

(18) Nadia would hate for there to be a penguin that she hasn't already met.
(19) ⇏ Nadia would hate for there to be a penguin.(4)

The examples above have all involved explicit or implicit propositional attitudes, and such contexts are apparently necessary for ambiguities of attribution of description and the associated possible inference failure and for problems of postmodifier deletion. However, there are many other kinds of context in which other intension-related ambiguities and inference failures can occur. For example, existential generalization can also fail in contexts of similarity and possibility:

(20) Nadia is dressed like a creature from outer space.
(21) ⇏ There is a creature from outer space whom Nadia is dressed like.

(2) It may be objected that an extensional future-application reading is also possible. This would be like (14), except that Nadia has some particular dog in mind for the cosmetic alterations. If we allow Nadia to use this method repeatedly upon a particular dog, then an extensional reading corresponding to (15) would be derived. That is, Nadia wants her particular dog to once or repeatedly become like Ross's dog. However, we don't see these readings as distinct from (14) and (15); Nadia's desire is clearly towards the goal of having a dog that matches a particular description, rather than towards that of owning a particular dog.

(3) Hofstadter, Clossman, and Meredith (1982) analyze a similar sentence for the case where the speaker and the agent are the same, I want the fastest car in the world, and derive five readings where we predict four. However, their two extensional readings are identical in our analysis, as they differ only in how many separate descriptions the agent has for the entity.

(4) This example is based on one of Fodor's (1980: 188). Fodor claims that postmodifier deletion is never valid in an opaque context; as example (17) shows, this claim is too strong. The problem in (19) seems to be that would hate means wants not, and the deletion is invalid in the scope of a negation.
Many seem to display idiosyncratic semantic features that could necessitate a broad range of operators in a representation, destroying any apparent homogeneity of the class. It is our suggestion, however, that these constructs can be processed in a uniform way. We argue that the diversity among the constructs can be accounted for by evaluating descriptors according to intensionality, agents, time, and states of affairs. Introducing the concept of a descriptor preserves the homogeneity of the class, while the dimensions along which descriptors may vary provide enough detail to differentiate among the particular semantics of the constructs. 4. The descriptor representation In this section we introduce a representation designed to capture the different possible readings of opaque constructions. In developing the representa- tion, we have tried to move away from previous approaches to intensionality, such as that of Mon- tague (1973), which use truth conditions and mean- ing postulates, and which take no account of the beliefs or knowledge of agents. Influenced by recent work on situation semantics (Barwise and Perry 1983, Lespe'rance 1986) and belief logics, we have aimed for a more 'common-sense' approach. In the representation, we take an intension to be a finite representation of those properties that characterize membership in a class, and by a descrip- tor we mean a non-empty subset of the elements of an intension (in practice, often identical to the e0m- plete intension). A descriptor provides access either to the intension of which it is a part or to its exten- sion. This eliminates the need of explicitly listing all the known properties of an entity; only properties 194 relevant to the discourse situation are mentioned. The representation is described in detail in Fawcett (1985); below we give a short description of the main points, and some examples of its use. The representation is based on conventional tem- poral modal logic. The general form of.a completed sentential clause is a proposition of the form (term-list) <predication>. The term-list, which can be empty, contains all the quantified terms except those which are opaque with respect to agents or time; the predication expresses the main relation among the various entities referred to. The intention is that the term-list provides the information to identify referents in the knowledge base, and the main predication asserts new informa- tion to be added to it. Usually the argument posi- tions of the predication will be filled by bound vari- ables or constants, introduced previously in the term-list. However, within temporal operator or agent scopes, argument positions may instead con- tain quantified terms. Term-list-predicate pairs may be nested inside of one another. Quantified terms arise from noun phrases. They have the general form (Det X." R(X)) where Det is a quantifier corresponding to the expli- cit or implicit determiner of the noun phrase, X is the variable introduced, and R(X) indicates restric- tions on X. In the examples below, we restrict our- selves to only three quantifiers -- indcf, def, and label, introduced by indefinite descriptions, definite descriptions, and proper nouns respectively. 5 To this formalism, we add the following: • The agent scope marker ^. This marker can apply to a formula or term to indicate that any embedded descriptors must be evaluated with respect to the beliefs of the agents involved (that is, mentioned so far) at the point where the scope of begins. 
• The intensional abstractor int-abs. The formula

    int-abs(C, (Quant Var : Description))

asserts that the quantified term Var is to have an intensional referent (i.e., an individual or universal concept), which is returned in C. If C is subsequently used, then its referent is a universal (generic) concept, which we do not discuss in this paper; see Fawcett (1985) for details. If Var is used instead, then the referent is an individual concept. (Without int-abs, use of Var refers to an extension.)

• Descriptors. The notation [d X] indicates that the properties d are being used as a descriptor of entity X. Thus its intensionality, time of application, and agent must be considered. (Variables over such descriptors are permitted, so we can manipulate them independently of the entities to which they might refer.)

Thus, opacity with respect to agents and opacity with respect to time are both treated as scope ambiguities, while intensionality is marked as a binary distinction. In general, all quantified terms are left-extraposed to the outermost term list. Those quantified terms marked as intensionally ambiguous may be prefixed by int-abs. Those quantified terms originating within the scope of the agent scope marker ^ may remain inside its scope and be evaluated relative to the agents available at that point. Similarly, those quantified terms originating in the scope of the temporal operators F and P (future and past) may stay inside their scope, thus indicating a future or past application of the descriptor.

The following example shows the representations of the first four readings of (8) (i.e., those with the description applied at the time of the utterance), and an extensional counterpart. (In the examples, the quantifier indef corresponds to the English determiner a, and the quantifier label is used for proper nouns. The structure of the descriptor dog-like-ross's, orthogonal to our concerns here, is not shown.)

(24) Transparent reading, agent's description: There is a dog Nadia wants, and she describes it as being like Ross's dog.

    (label Y : Nadia) <want Y, ^(indef X : [dog-like-ross's X])>

(25) Transparent reading, speaker's description: There is a dog Nadia wants, and the speaker describes it as being like Ross's dog.

    (label Y : Nadia) (indef X : [dog-like-ross's X]) <want Y, ^X>

(26) Opaque reading, agent's description: Nadia wants something she describes as being a dog like Ross's.

    (label Y : Nadia) <want Y, ^int-abs(C, (indef X : [dog-like-ross's X]))>

(27) Opaque reading, speaker's description: Nadia wants something that the speaker describes as being a dog like Ross's.

    (label Y : Nadia) int-abs(C, (indef X : [dog-like-ross's X])) <want Y, ^X>

Note that the fourth reading has no representation in a conventional first-order modal language. For comparison, here is a non-opaque sentence of the same structure.

(28) Nadia buys a dog like Ross's.

    (label Y : Nadia) (indef X : [dog-like-ross's X]) <buy Y, X>

Within the scopes of the opaque operators F, P, and ^, special checks must be made before standard inference rules can apply.(6) We do not assume that all arguments are intensional; we favour a policy towards intensional scopes of "introduce when required" to minimize the amount of extra processing needed.

(6) This is analogous to the restricted rules that Montague presents for substitution of identicals and lambda conversion in his intensional logic (Dowty, Wall, and Peters 1981: 165). We seek a more flexible scheme that, rather than prohibiting inference, restricts its use to certain special cases.
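The four readings differ only in where the quantified term sits relative to ^ and in whether int-abs wraps it, which is easy to see if (24)-(27) are spelled out as nested structures. The following Python rendering is purely illustrative (the paper's representation is the logic above, not code, and all names here are ours):

    from dataclasses import dataclass

    @dataclass
    class Term:                 # (quant var : [descriptor var])
        quant: str
        var: str
        descriptor: str

    @dataclass
    class IntAbs:               # int-abs(C, term): an individual concept
        concept: str
        term: Term

    @dataclass
    class AgentScope:           # ^body: descriptors inside are the agents'
        body: object

    nadia = Term('label', 'Y', 'Nadia')
    def dog(): return Term('indef', 'X', "dog-like-ross's")

    r24 = (nadia, 'want', AgentScope(dog()))                    # transparent, agent's
    r25 = (nadia, dog(), 'want', AgentScope('X'))               # transparent, speaker's
    r26 = (nadia, 'want', AgentScope(IntAbs('C', dog())))       # opaque, agent's
    r27 = (nadia, IntAbs('C', dog()), 'want', AgentScope('X'))  # opaque, speaker's
    print(r26)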
Our use of the symbol ^ is quite different from that of Montague. For Montague, ^x denotes an object that is intensional. We instead use this notation to delimit the agent scope of an opaque construct; descriptors in x are potentially ascribed to any of the agents preceding the ^ marker.

Our approach to determiners is a compromise between other common approaches. The first, common in computational linguistics, is to represent determiners by three-place quantifiers of the general form

    det(x, R(x), P(x))

where x is the variable introduced, R is the restriction on the variable, and P is the new predication on the variable. This reflects observations of Moore (1981) and others that determiners rarely have a direct correlation with the existential and universal quantifiers of first-order logic. In many of the meaning representations used with logic grammars (Dahl (1981), for example), determiners provide the basic structure of the meaning representation formula. The determiners are translated into quantifiers and are all left-extraposed (to be later scoped relative to one another on the basis of some reasonably simple set of rules). As a result, the main predication of a clause will always be nested in the rightmost predication position.

Another approach focuses more on the main verbs by first translating them into predicates, and subsequently finding appropriate fillers for their arguments that contain the necessary quantifiers. However, this does not allow a convenient way to represent relative scoping ambiguities.

Montague combines the two approaches. All quantifiers introduce two predicates: a restriction predicate and a main predication, as in

    λR λP (∃x (R{x} AND P{x}))

which translates the determiner a.

Our approach is a compromise. Quantified terms consist of a variable and restriction, but do not incorporate the main predication. All quantified terms (except those that are opaque with respect to time or agent) are left-extraposed and assimilated into a single list structure followed by a single main predication.

5. Substitution of equals

Given our descriptor logic, we can now turn to the question of when substitution-of-equals inferences can and can't be made.

The failure of substitution of equivalent phrases appears to be a gradable notion; the degree of substitution allowed varies with the type of construct under consideration. We can think of a scale of substitutivity, with the lower bound being a strictly de dicto reading in which no substitutions are permitted and the upper bound a strictly de re reading in which co-extensional phrases can be substituted in any context. For example, sentences that refer directly to the form of the expression admit no substitution:

(29) The Big Bopper was so called because of his size and occupation.
(30) The Big Bopper was J. P. Richardson.
(31) ⇏ J. P. Richardson was so called because of his size and occupation.
In sentences of propositional attitude, certain descriptors can be substituted for, provided the content of the proposition, relative to the speaker and the hearer, is not affected. It is easy to recognize such cases, but not always easy to specify what exact criteria determine terms that are interchangeable. Consider:

(32) Nadia thinks that the Queen of England is a lovely lady.
(33) Nadia thinks that Queen Elizabeth is a lovely lady.
(34) Nadia thinks that the titular head of the Church of England is a lovely lady.

The assumption is that since the filler of the role Queen of England is not likely to change within the time of the conversation, and the speaker, the hearer, and Nadia are all aware of who fills that role, it is acceptable to substitute the filler for the role and vice versa. Thus, sentence (33) can be inferred from (32). But to substitute the phrase the titular head of the Church of England, as in (34), seems to attribute more knowledge to Nadia than was in the original statement.

The problem of substitution in opaque contexts stems from the failure to recognize how descriptors relate, and not, as in classical logical approaches, from the failure of expressions to be "co-intensional". The emphasis should be on identifying the relation between descriptors with respect to appropriate agents rather than on co-intensionality alone; in most cases co-intensionality is too strong a condition for substitution. Rather, the background assumptions of the discourse determine whether a substitution of one descriptor for another is permitted.

A typical substitution replaces the target descriptor, d1, with an equivalent descriptor, d2, from the background assumptions, but otherwise preserves the form of the target sentence, i.e., RESULT = TARGET[d1/d2].(7)

To see whether a descriptor substitution is valid in an opaque context, three factors must be checked in the following order: the intensionality of the descriptor, the time of reference of the descriptor, and the agents of the descriptor. We must establish the "level" of each factor in the target sentence and then determine whether the background assumptions authorize substitutions at that level. That is, we must relate the intensionality, time, and agent of the descriptor equivalence asserted in the background assumptions to those of the target descriptor, and then assert the intensionality, time, and agent of the descriptors in the resulting clause (after any substitutions). The background assumptions will have already been derived from earlier input (in a manner described by Fawcett 1985, section 5.5) and assimilated into the system's general knowledge base. In order to compare descriptors in the target to descriptors in the background assumptions, we extract the relevant aspects from the representation of each, and express them explicitly by the use of the following descriptor predicates, which can then be used to query the knowledge base.

• desc(a, e, d1). Ascribes a particular descriptor to an individual; "agent a would use the descriptor d1 to describe the entity e".

• label(a, c, name). Indicates that the label name is known by agent a to be a label for the (individual) constant c.

• time(t, e, d1). Asserts that descriptor d1 describes entity e at time t.

(7) Not all substitutions are of this form; see Fawcett 1985, section 5.4.
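These predicates suggest a simple fact-base interface for the checks that follow. As a sketch only (the actual prototype, described in Section 6, is in Prolog; every name here is invented):

    KB = set()

    def desc(a, e, d):  KB.add(('desc', a, e, d))    # agent a describes e by d
    def label(a, c, n): KB.add(('label', a, c, n))   # a knows n names constant c
    def time(t, e, d):  KB.add(('time', t, e, d))    # d describes e at time t

    def holds(*fact):
        return fact in KB

    # e.g. the listener models Nadia as having both descriptors for car c1:
    desc('nadia', 'c1', 'fastest-car-in-the-world')
    desc('nadia', 'c1', "ross's-jaguar-300")
    print(holds('desc', 'nadia', 'c1', "ross's-jaguar-300"))   # True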
As an example, consider the four readings of this sentence in which the description is applied at the time of utterance:

(35) Nadia wants the fastest car in the world.

(i) Extensional reading, speaker's description:
    (label Y : Nadia) (def X : [fcw X]) <want Y, ^X>

(ii) Intensional reading, speaker's description:
    (label Y : Nadia) int-abs(C, (def X : [fcw X])) <want Y, ^X>

(iii) Extensional reading, agent's description:
    (label Y : Nadia) <want Y, ^(def X : [fcw X])>

(iv) Intensional reading, agent's description:
    (label Y : Nadia) <want Y, ^int-abs(C, (def X : [fcw X]))>

(fcw stands for the descriptor fastest-car-in-the-world.)

Table I lists some different possible background assumptions. We will show the different effects of each. Background assumption I asserts the co-extensionality of the descriptors fastest car in the world and Ross's Jaguar 300, while assumption II asserts co-intensionality of the descriptors. Assumptions III and IV express the same equivalences, and, additionally, knowledge of them is also attributed to Nadia.

TABLE I: BACKGROUND ASSUMPTIONS

    I    The fastest car in the world is Ross's Jaguar 300.
    II   The fastest car in the world (always) is a Jaguar 300.
    III  Nadia believes that the fastest car in the world is Ross's Jaguar 300.
    IV   Nadia believes that the fastest car in the world is a Jaguar 300.

When the beliefs of agents (other than the listener) are not involved, the following rule licenses certain substitutions of equivalents:

• If the target descriptor is intensional,(8) then co-intensional or definitionally equivalent descriptors in the background assumptions may be substituted.

Background assumptions I and II thus allow substitutions in readings (i) and (ii), as shown in Table II. (For simplicity, the quantifier (label Y : Nadia) is omitted from each example.)

When attribution of descriptions is involved, as in readings (iii) and (iv) of (35), we must determine whether the other agents are (believed by the listener to be) aware of the equivalence. The general rule for substituting descriptors which are ambiguous with respect to descriptive content is this:

• If the assertion of descriptor equivalence in the background assumptions in the listener's knowledge base is part of the knowledge base of the agent to whom the target descriptor is ascribed, then the descriptor can be substituted in the target. The resulting clause will have the substituted descriptor attributed to the same agents as the descriptor in the original target.

Reading (iii) requires a co-extensional descriptor that Nadia is aware of. Background assumptions III and IV both provide such a descriptor. Reading (iv) also requires a descriptor that Nadia is aware of, but it must be co-intensional with the target descriptor; only assumption IV provides such a descriptor, which can then be substituted. The results are shown in Table II.

TABLE II: SUBSTITUTIONAL INFERENCES

    (i) + I      Nadia wants Ross's Jaguar 300.
                 (def X : [ross's-jag300 X]) <wants Y, ^X>
    (i) + II     Nadia wants a Jaguar 300.
                 (def X : [jag300 X]) <wants Y, ^X>
    (ii) + I     No substitution possible.
    (ii) + II    Nadia wants a Jaguar 300.
                 int-abs(C, (def X : [jag300 X])) <wants Y, ^X>
    (iii) + III  Nadia wants Ross's Jaguar 300.
                 <wants Y, ^(def X : [ross's-jag300 X])>
    (iii) + IV   Nadia wants a Jaguar 300.
                 <wants Y, ^(indef X : [jag300 X])>
    (iv) + III   No substitution possible.
    (iv) + IV    Nadia wants some Jaguar 300.
                 <wants Y, ^int-abs(C, (indef X : [jag300 X]))>

(8) In this rule, the descriptor must not be generic. Rules for generics (universal concepts) are described in Fawcett 1985, section 5.4.
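Put procedurally, the licensing decision keys on two things: the intensionality of the target descriptor and whose knowledge base must contain the equivalence. A sketch under obvious simplifications (equivalences as labeled pairs; co-intensionality should strictly imply co-extensionality, which is omitted here; all names ours):

    def may_substitute(target, candidate, intensional, agents, equivs):
        """The two substitution rules, schematically: an intensional target
        needs a co-intensional (or definitional) equivalence, an extensional
        target only a co-extensional one, and the equivalence must be held
        by every agent to whom the description is ascribed."""
        needed = 'co-int' if intensional else 'co-ext'
        return all((needed, target, candidate) in equivs[a] for a in agents)

    equivs = {
        'speaker': {('co-ext', 'fcw', "ross's-jag300")},       # assumption I
        'nadia':   {('co-ext', 'fcw', "ross's-jag300"),        # assumption III
                    ('co-int', 'fcw', 'jag300')},              # assumption IV
    }

    print(may_substitute('fcw', "ross's-jag300", False, ['nadia'], equivs))  # (iii)+III: True
    print(may_substitute('fcw', "ross's-jag300", True,  ['nadia'], equivs))  # (iv)+III:  False
    print(may_substitute('fcw', 'jag300',        True,  ['nadia'], equivs))  # (iv)+IV:   True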
Substitution rules for other intensional constructs, and details of interactions between rules, can be found in Fawcett (1985, section 5.4).

6. Implementation

We have implemented a prototype system that incorporates the ideas discussed above. The system is written in Prolog, and is built on top of Popowich's SAUMER formalism for syntactic and semantic rules (Popowich 1984, 1985).

7. Plans and goals

Now that we have looked at the problem of detecting these ambiguities and representing the possible readings, the next step is to study how the ambiguities may be resolved, and what factors influence the preference for one reading over another. We expect that in most cases pragmatic factors will be central, although there may be default preferences in some constructions. In addition, another member of our group, Diane Horton, is studying the interaction between agents' descriptions and the presuppositions of a sentence (Horton 1986).

Acknowledgements

This paper is based on thesis work by the first author under the supervision of the second, who also wrote the paper. The authors acknowledge helpful discussions with each other, Diane Horton, and Hector Levesque, and financial support from IBM, the Natural Sciences and Engineering Research Council of Canada, and the University of Toronto. They are also grateful to Nick Cercone and Fred Popowich for making the SAUMER system available to them.

References

BARWISE, Jon and PERRY, John (1983). Situations and attitudes. Cambridge, MA: The MIT Press / Bradford Books, 1983.

DAHL, Veronica (1981). "Translating Spanish into logic through logic." American journal of computational linguistics, 7(3), 149-164.

DOWTY, David R; WALL, Robert E; and PETERS, Stanley (1981). Introduction to Montague semantics (Synthese language library 11). Dordrecht: D. Reidel, 1981.

FAGIN, Ronald and HALPERN, Joseph Y (1985). "Belief, awareness, and limited reasoning: Preliminary report." Proceedings of the Ninth International Joint Conference on Artificial Intelligence, Los Angeles, August 1985. 491-501.

FAWCETT, Brenda (1985). The representation of ambiguity in opaque constructs. MSc thesis, published as technical report CSRI-178, Department of Computer Science, University of Toronto, October 1985.

FODOR, Janet Dean (1980). Semantics: Theories of meaning in generative grammar (The language and thought series). Cambridge, Mass.: Harvard University Press, 1980.

HOFSTADTER, Douglas R; CLOSSMAN, Gary A; and MEREDITH, Marsha J (1982). " 'Shakespeare's plays weren't written by him, but by someone else of the same name.' An essay on intensionality and frame-based knowledge representation." Bloomington, Indiana: Indiana University Linguistics Club, November 1982.

HORTON, Diane (1986). Incorporating agents' beliefs in a model of presupposition. MSc thesis, Department of Computer Science, University of Toronto, forthcoming (June 1986).

LESPÉRANCE, Yves (1986). "Toward a computational interpretation of situation semantics." Computational intelligence, 2(1), February 1986.

LEVESQUE, Hector (1983). "A logic of implicit and explicit belief." Proceedings of the National Conference on Artificial Intelligence (AAAI-83), Washington, D.C., August 1983, 198-202.

MONTAGUE, Richard (1973). "The proper treatment of quantification in ordinary English." [1] In: Hintikka, Kaarlo Jaakko Juhani; Moravcsik, Julius Matthew Emil and Suppes, Patrick Colonel (editors). Approaches to natural language: Proceedings of the 1970 Stanford Workshop on Grammar and Semantics. Dordrecht: D.
Reidel, 1973. 221-242. [2] In: Thomason, Richard Hunt (editor). Formal Philosophy: Selected Papers of Richard Montague. New Haven: Yale University Press, 1974. 247-270.

MOORE, Robert C (1981). "Problems in logical form." Proceedings of the 19th Annual Meeting, Association for Computational Linguistics, Stanford, June 1981, 117-124.

POPOWICH, Fred (1984). "SAUMER: Sentence analysis using metarules." Technical report 84-2, Laboratory for Computer and Communications Research, Simon Fraser University, Burnaby, B.C., Canada. August 1984.

POPOWICH, Fred (1985). "The SAUMER user's manual." Technical report 85-4, Laboratory for Computer and Communications Research, Simon Fraser University, Burnaby, B.C., Canada. March 1985.
A PROPERTY-SHARING CONSTRAINT IN CENTERING

Megumi Kameyama
Department of Computer and Information Science
The Moore School of Electrical Engineering/D2
University of Pennsylvania
Philadelphia, PA 19104

ABSTRACT1

A constraint is proposed in the Centering approach to pronoun resolution in discourse. This "property-sharing" constraint requires that two pronominal expressions that retain the same Cb across adjacent utterances share a certain common grammatical property. This property is expressed along the dimension of the grammatical function SUBJECT for both Japanese and English discourses, where different pronominal forms are primarily used to realize the Cb. It is the zero pronominal in Japanese, and the (unstressed) overt pronoun in English. The resulting constraint complements the original Centering, accounting for its apparent violations and providing a solution to the interpretation of multi-pronominal utterances. It also provides an alternative account of anaphora interpretation that appears to be due to structural parallelism. This reconciliation of centering/focusing and parallelism is a major advantage. I will then add another dimension called the "speaker identification" to the constraint to handle a group of special cases in Japanese discourse. It indicates a close association between centering and the speaker's viewpoint, and sheds light on what underlies the effect of perception reports on pronoun resolution in general. These results, by drawing on facts in two very different languages, demonstrate the cross-linguistic applicability of the centering framework.

1. Introduction

Grosz, Joshi, & Weinstein (1983) postulated that each utterance in discourse concerns a set of entities called the centers, and discussed how certain facts of local discourse connectedness (as opposed to global) can be accounted for using this notion.2 Centers are semantic objects--(sets of) individuals, objects, states, actions, or events--represented in complex ways so that a strict coreference need not hold between anaphorically related terms.3 A center mentioned in the current utterance may be mentioned again in the next utterance (by the same or a different speaker). In this sense, a center is "forward-looking" (Cf). Crucially, one of the centers may be identified as "backward-looking" (Cb). Cb is the entity an utterance most centrally concerns. Its main role is to connect the current utterance to the preceding one(s).4 The term the Center is also used for the Cb. Thus an utterance may be associated with any number of Cfs, one of which may be the Cb. These Cfs are given a default expected Cb order, that is, "how much each center is expected to be the next Cb". I regard Cb to be optional for an utterance.5 It comes into existence by way of a Cb-establishment process, that is, the process in which a previous non-Cb becomes the new Cb in discourse. Sidner's (1981, 1983) immediate focus and potential foci in local focusing correspond to Cb and Cfs, respectively. The difference is that Sidner uses two immediate foci (Discourse Focus and Actor Focus) while centering uses only one (Cb) (see Grosz et. al. 1983 for discussion).

1 This work was supported in parts by the Center for the Study of Language and Information at Stanford University and by grants from the National Science Foundation (DCR84-11726) for the Department of Computer and Information Science and from the Alfred P. Sloan Foundation for the Cognitive Science Program at the University of Pennsylvania.
2 In a theory of discourse structure that consists of three interacting components, linguistic, intentional, and attentional (Grosz & Sidner 1985), centers are found in the local attentional structure.
3 See Sidner's (1979) focus representation, for instance.
4 The notion of centering originally comes from Joshi & Weinstein (1981).
5 We can view Cb either optional or obligatory for each utterance. The difference seems more conceptual than substantial since what is crucial for providing a referent candidate is the expected Cb order given to the Cf set whether this set contains the Cb or not. Relative merits of each approach should be clarified in the future.

Various factors --syntactic, semantic, and pragmatic-- are combined for the identification of the Cb. One of them is the use of pronominal expressions, as expressed in the original Centering rule (Grosz et. al. 1983):6

(1) If the Cb of the current utterance is the same as the Cb of the previous utterance, a pronoun should be used.

(1) is stated as a heuristic in the production of English. It is assumed that an equivalent interpretation heuristic is used by a hearer. Roughly, a pronoun "realizes" the current Cb that continues the previous Cb.7

In this paper, I will first point out certain facts that the basic Centering rule does not explain, then propose a further constraint that substantiates the basic rule. This is called the "property-sharing" constraint, which requires that two pronominal elements realizing the same Cb in adjacent utterances share a certain common grammatical property. This shared property itself is expressed as a default preference order reflecting the nature of the constraint as a discourse rule. The initial formulation of the constraint only refers to the grammatical function SUBJECT. It explains the problem cases for the basic Centering rule in Japanese and English. It also accounts for a subset of what appears to be an effect of structural parallelism in anaphora interpretation. Then I will propose an additional dimension of the shared property called the "speaker identification" property. The revised constraint referring to both dimensions accounts for a group of counterexamples to the initial formulation found in Japanese discourse. It also sheds light on what is involved in interpreting perception reports in both languages.

Before starting the discussion, I would like to comment on the nature of the data used here. I will mostly use constructed discourse sequences where the role played by commonsense inferences or special linguistic devices (such as stress and intonation) for guiding pronoun interpretations is minimal. All examples in this paper are to be read with flat intonation with unstressed pronouns. These limitations are in order to identify the grammatically-based default order that gives rise to preferred interpretations in neutral contexts. Note that this default order alone does not determine interpretations of pronominal elements.
Rather, its role in the centering framework is to give an ordered list of referents (centers) so that commonsense inferences can be controlled. Interpretations and acceptability judgements of the examples in this paper result from interviews with a number of native speakers in each language.

2. The SUBJECT constraint

2.1. Japanese

In Japanese, the expression primarily used to realize the Cb is the zero pronominal (i.e., unexpressed subject or object).8 The grammatical features (e.g., gender, number, person) of these unexpressed subjects and objects are not morphologically marked elsewhere in the sentence, which distinguishes them from the so-called "pro-drop", such as the unexpressed finite clause subject in Italian and Spanish whose grammatical features are morphologically marked on the verb inflection. The basic Centering rule in Japanese can be obtained by changing the word pronoun to zero pronominal in (1) (Kameyama 1985). In the following discourse fragment, it is reasonable to assume that Rosa is the Cb of the second utterance:9

(2) 1. Rosa wa dare o matte-iru no-desu ka.
       Rosa TP-SB who OB is-waiting-for ASN Q
       "Who is Rosa waiting for?"
    2. φ Mary o matte-iru no-desu.
       SB Mary OB is-waiting-for ASN
       "[She] is waiting for Mary." [Cb=Rosa]

It seems equally reasonable to assume that Rosa is the Cb of the second utterance in the following variation of (2):

(3) 1. Dare ga Rosa o matte-iru no-desu ka.
       who SB Rosa OB is-waiting-for ASN Q
       "Who is waiting for Rosa?"
    2. Mary ga φ matte-iru no-desu.
       Mary SB OB is-waiting-for ASN
       "Mary is waiting for [her]." [Cb=Rosa]

If the Cb-status of an entity is homogeneous, we would expect that the two instances of the Cb above have exactly the same effect, if any, on the subsequent utterance. When an identical third utterance is added to both, however, it becomes clear that the centered individual Rosa is not of an equal status in the two cases:

6 Grosz et. al. (in preparation) propose various constraints on this rule, and, among other things, distinguish between the retention and continuation of the Cb. I will use the words retain and continue in a non-technical sense in this paper.
7 An expression realizes a center rather than denoting it. Realization allows either a value-free or value-loaded interpretation (see Grosz et. al. 1983 for discussion).
8 Zero pronominals are also found in Chinese, Korean, Vietnamese, Thai, etc. I will also call them "zero-subject", "zero-object", and so on.
9 The following symbols are used for grammatical markers in the gloss: SB (subject), OB (direct object), O2 (indirect/second object), TP (topic), ASN (assertion), CMP (complementizer), Q (question). The symbol φ is used for a zero pronominal, and its translation equivalent appears in [].

(4) φ φ Yuusyoku ni syootaisi-ta no-desu.
    SB OB supper to invited ASN
    "[She] invited [her] to dinner."
    after (2): [strong preference: Rosa invited Mary]
    after (3): [weak preference: Mary invited Rosa]

(5) φ Rosa ni yuusyoku ni syootais-are-ta no-desu.
    SB Rosa by supper to was-invited ASN
    "[She] was invited by Rosa to dinner." (she =: Mary)10
    after (2): marginal (*?)
    after (3): acceptable

The extension (4) is a multi-zero-pronominal utterance. The zero-subject and zero-object pronominals receive reverse interpretations depending on whether the utterance follows (2) or (3). Although this fact by itself does not contradict the basic rule (1), it poses a question as to which zero pronominal in (4) realizes its Cb. There are the following two possibilities.
If the previous Cb continues to be the current Cb by default, it follows that the choice of the Cb-realizing zero pronominal depends entirely on the preceding discourse context. On the other hand, if some inherent property of a zero pronominal (e.g., subject/object) independently decides which one realizes the Cb, the previous context need not be considered. For instance, if a zero-subject is always more closely associated with the Cb than a zero-object, the discourse sequence (3) to (4) changes the Cb from Rosa to Mary.

In the extension (5), Rosa (the previous Cb) is mentioned with a full name while the single zero pronominal picks out a previous non-Cb, Mary. If Rosa is still the Cb here, this utterance violates the basic Centering rule, so the rule predicts unacceptability, which is indeed the case following the sequence (2).11 The same rule, however, provides no clue for the puzzling acceptability of the same extension following the sequence (3). Moreover, it is possible that Rosa is no longer the Cb in (5), in which case rule (1) simply does not apply.

10 =: indicates the association between a linguistic item (left-hand side) and a non-linguistic entity (right-hand side).
11 Note that violating a discourse rule like (1) leads to more difficulty in understanding rather than clear-cut "ungrammaticality".

Examples like these are the basis for the first version of the Centering Constraint:

(6) Centering Constraint [Japanese] (1st approximation)
    Two zero pronominals that retain the same Cb in adjacent utterances should share one of the following properties: SUBJECT or nonSUBJECT.12

(6) says that two zero pronominals supporting the same Cb in adjacent utterances should both be either SUBJECT or nonSUBJECT. In the case of discourse extension (4) above, if the Cb is still Rosa, it should be realized with a zero-subject after the sequence (2) and with a zero-object after (3). This is shown below:

(7) 1. [Cb<SUBJ> = Rosa] ← (2)-2
    2. [Cb<SUBJ> = Rosa] ← (4) [strong preference]

(8) 1. [Cb<OBJ> = Rosa] ← (3)-2
    2. [Cb<OBJ> = Rosa] ← (4) [weak preference]

I attribute the different degree of preference between (7) and (8) to the difference in canonicality of centering. A Cb continued with zero-subjects as in (7) is more stable, or more canonical, than one continued with zero-objects as in (8), which is but one manifestation of the overall significance of SUBJECT in centering.13 This leads to the second approximation of the Centering Constraint:

(9) Centering Constraint [Japanese] (2nd approximation)
    Two zero pronominals that retain the same Cb in adjacent utterances should share one of the following properties (in descending order of preference): 1) SUBJECT, 2) nonSUBJECT.

Constraint (9) predicts that retaining a Cb is good when the two pronominals are both either SUBJECT or nonSUBJECT, while it is bad (i.e., leading to complex inferences) when one is SUBJECT and the other is not. This in turn predicts that changing the Cb across adjacent utterances is acceptable when the two pronominals have different properties while it is not when they are of the same property. The difference in acceptability between sequence (2) to (5) (marginal) and sequence (3) to (5) (acceptable) would then follow from this constraint. The former is bad because it changes the Cb with two SUBJECT zero pronominals, as shown in (10). The latter is good because it changes the Cb with different zero pronominals (from OBJECT to SUBJECT), as shown in (11):

(10) 1. [Cb<SUBJ> = Rosa] ← (2)-2
"2 [Cb<SUBJ> = Mary] <--(5) [marginal] (11) 1. [Cb<OBJ> -- Rosa] <-(3)-2 2. [Cb<SUBJ>=Mary] <-(5)[acceptable] 12I'm refen'ing to the "surface" grammatical function SUBJECT. 13The importance of SUBJECT in centering is also discussed in Grosz et. al. (in preparation). 202 The acceptability of the Cb-shift shown in (11) above contrasts with the unacceptability of retaining the Cb with these pronominals. The latter in fact appeared in the above example as the nonpreferred reading of sequence (3) to (4), which is shown in (12): (12) 1. [Cb<OBJ> = Rosa] 2. ?? [Cb<SUBJ> = Rosa] 2.2. Engfish The following sequences in English are equivalent to those in Japanese (2) to (5): (13) 1. Who is Max waiting for? 2. He is waiting for Fred. [Cb<SUBJ>=Max] 3a. He invited him to dinner. [strong preference: Max invited Fred] 3b. ?* He was invited by Max to dinner. (14) 1. Who is waiting for Max? 2. Fred is waiting for him. [Cb<nonSUBJ>=Max] 3a. He invited him to dinner. [weak preference: Fred invited Max] 3b. (.9) He was invited by Max to dinner. The evaluation of the third utterance parallels the Japanese example. This indicates that the SUBJECT-based constraint stated in (9) for Japanese is applicable to English together with all the analogous consequences discussed above. The constraint is restated below for pronominal expressions in general: (15) Centering Constraint [general] (approximation) Two pronominal expressions that retain the same Cb in adjacent utterances should share one of the following properties (in descending order of preference): 1) SUBJECT, 2) nonSUBJECT. The particular kind of pronominal expressions relevant here vary from language to language. Kameyama (1985: Ch.1) hypothesized that it is the pronominal element with the "less phonetic content" for each grammatical function of a language 14 and that it is predictable from the typological perspective on available pronominal forms. For instance, it is the unstressed pronoun in English where pronouns must always be overt, and it is the zero pronominal in Japanese where pronouns with no phonetic 14It is possible that only certain grammatical functions (e.g., SUBJECT, OBJECT, and OBJECT2) are relavant.to the Cb. This will have to be clarified in the future. content exist (for subjects and objects). It is further predicted that morphologically bound pronominal forms (i.e., agreement inflections, clitics, and affixes) rather than full independent pronouns are used for Cb-realization if a language has this option. For instance, this option exists for the finite clause subject in Italian and Spanish in terms of the agreement inflection, and for the t'mite clause subject and object in Warlpiri in terms of clities. The constraint in English is stated below: (16) Centering Constraint [English] Two unstressed pronouns that retain the same Cb in adjacent utterances should share one of the following properties (in descending order of preference): 1) SUBJECT, 2) nonSUBJECT. 2.3. Accounting for the effect of parallelism in Cb-establlshment The given property-sharing constraint has so far been proposed for pronominal elements that retain the same Cb in adjacent utterances. By its reference to the grammatical property SUBJECT, the constraint indicates that adjacent utterances of the same Cb cohere even better when there is a certain degree of grammatical parallelism. Analogous constraints account for at least two other kinds of parallelism effects on pronoun interpretation in English. 
2.3. Accounting for the effect of parallelism in Cb-establishment

The given property-sharing constraint has so far been proposed for pronominal elements that retain the same Cb in adjacent utterances. By its reference to the grammatical property SUBJECT, the constraint indicates that adjacent utterances of the same Cb cohere even better when there is a certain degree of grammatical parallelism. Analogous constraints account for at least two other kinds of parallelism effects on pronoun interpretation in English. They are in the context of what I call the Cb-establishment, that is, the process in which a previous non-Cb becomes the Cb. The case of Cb-shift is a subcase of Cb-establishment.15

15 In the present approach, the default "expected Cb" is the (matrix) SUBJECT referent, and the Cb is established in the next utterance with a (matrix) (SUBJECT) pronoun, if there is one. More factors such as TOPIC (wa-marking) and Ident (see below) are also relevant to the centering in Japanese. These are discussed in the longer paper in preparation.

Ambiguous multi-pronouns. The first is the interpretation of a multi-pronominal utterance that establishes a Cb. An example follows:

(17) 1. Max is waiting for Fred.
     2. He invited him to dinner.
        [preference: Max invited Fred]

(17) shows that when two pronouns are potentially ambiguous in reference, the preferred interpretation conforms to a property-sharing constraint. That is, there is a higher tendency that the SUBJECT pronoun corefers with the SUBJECT of the previous utterance. It is crucial here that (a) there is more than one pronoun and (b) two (or more) of them are potentially ambiguous (i.e., of the same grammatical features). Otherwise, the process of Cb-establishment need not be constrained by the property-sharing, as illustrated in the following examples:

(18) [single pronoun]
     1. Carl is talking to Tom in the Lab.
     2. Terry was just looking for him.
        [preference: him =: Carl]

(19) [unambiguous two pronouns]
     1. Max is waiting for Susan.
     2. She invited him to dinner.

(18)-2 has only one pronoun and (19)-2 has two pronouns with different gender. In both cases, the nonSUBJECT pronoun naturally corefers with the previous SUBJECT. The property-sharing constraint becomes relevant only in the case of completely ambiguous multi-pronouns as in (17). Note that this in turn explains why the property-sharing was first recognized for zero pronominals, which lack gender/number/person distinctions altogether.

Explicitly signalled parallelism. The second relevant type of parallelism effect is found in a discourse sequence with explicit linguistic signals for a parallel structure. Examples follow:

(20) [Contrast this with (18)]
     1. Carl is talking to Tom in the Lab.
     2. Terry wants to talk to him too.
        [preference: him =: Tom]

(21) [from Sidner 1979:179]
     1. The green Whitierleaf is most commonly found near the wild rose.
     2. The wild violet is found near it too. <it =: wild rose>

Parallelisms in (20) and (21) are clearly signalled with (i) the same verbal expressions (talk to and be found near) and (ii) the word too. In such cases, a version of the property-sharing scheme would propose the correct specification of the single pronoun as the first choice. Since the pronouns are nonSUBJECT, they should co-specify with the nonSUBJECT in the first utterance, which are Tom and the wild rose, respectively.16 Significant here is the fact that (21) was a problem case for Sidner's (1979) focusing-based pronoun interpretation algorithm.
She in fact concluded that pronoun interpretation involving structural parallelism was a source for anaphora inherently different from focusing: "Focussing cannot account for the detection of parallel structure, not only because the computation of such structure is poorly understood, but also because focussing chooses different defaults for co-specification than those required for parallelism." (p.236) If a property-sharing constraint is invoked in interpreting (21)-2, the "wild rose" (nonSUBJECT) overrides the default expected Cb, the "green Whitierleaf" (SUBJECT), as the first-choice referent for the pronoun it (nonSUBJECT). The major advantage of the present property-sharing constraint is its role in combining the effects of both focusing/centering and structural parallelism.

3. The speaker identification constraint

3.1. Ident

Although correct in most cases, the Centering Constraint as stated in (9) is systematically violated by a certain group of counterexamples in Japanese. This has to do with what Kuno calls empathy, a grammatical feature especially prominent in Japanese, defined as follows:

(22) Empathy (Kuno & Kaburagi 1977:628)
     Empathy is the speaker's identification, with varying degrees, with a person who participates in the event that he describes in a sentence.

I will call it the speaker identification, or simply, identification.17 When the main predicate of an utterance selects one of its arguments for the identification locus (henceforth Ident), the speaker automatically identifies (with varying degrees) with the viewpoint of its referent (usually human). The unmarked Ident is the SUBJECT, but some verbs have nonSUBJECT Ident. For instance, among giving/receiving verbs, ageru 'give' and morau 'receive' have SUBJECT Ident, while kureru 'give' has OBJECT2 Ident,18 and for going/coming verbs, iku 'go' has SUBJECT Ident while kuru 'come' has nonSUBJECT Ident. Each Ident feature is carried over in a complex predicate made with one of these verbs as the "higher" predicate (e.g., V-kureru 'give the favor of V-ing', Ident=nonSUBJ). Counterexamples to the constraint stated in (9) are cases with verbs of nonSUBJECT Ident:

(23) 1. Masao wa Arabia-go o naratte-iru.
        Masao TP-SB Arabic OB is-learning
        "Masao is learning Arabic."
     2. Aruhi φ Arabia-zin no zyosei ni atta.
        one-day SB Arabian of lady to met
        "One day [he] met an Arabian lady." <Ident=SUBJ> [Cb<SUBJ>=Masao]
     3. φ φ Iroiro sinsetu-ni site-kureta.
        SB O2 variously kindly do-gave
        "φ gave various kind favors to φ." <Ident=OBJ2>
        [strong preference: The lady gave favors to Masao]
        <zero-SUBJ =: lady, zero-OBJ2 =: Masao>

The preferred reading of (23)-3 shows that the zero-Ident-OBJ2 is preferred over the zero-nonIdent-SUBJ for carrying over the Cb previously realized with a zero-Ident-SUBJ. In other words, when Ident and SUBJECT are split, Ident overrides SUBJECT as the stronger shared property for the zero pronominals that retain the same Cb across adjacent utterances.

16 The property of nonSUBJECT may have to be broken up into subclasses (possibly into each grammatical function) when there are more than one nonSUBJECTs in the first utterance.
17 "Identification" is a better term than "empathy" in conveying the lack of speaker's emotional involvement and, moreover, it was used in the original definition of empathy in (22). The basic characterization of this notion is fully credited to Kuno and Kaburagi, however.
18 OBJECT2 is the indirect or second object.
Based on the interpretation of various SUBJ/Ident combinations (see Kameyama 1985, Ch.2 for more details), the constraint is restated as follows:19

(24) Centering Constraint [Japanese] (final version)
     Two zero pronominals that retain the same Cb in adjacent utterances should share one of the following properties (in descending order of preference): 1) Ident-SUBJECT, 2) Ident alone, 3) SUBJECT alone, 4) nonIdent-nonSUBJECT.

The resulting constraint substantiates the role of the zero pronominal in the context of centering in Japanese discourse. The constraint in English need not incorporate the Ident property, however. According to Kuno & Kaburagi (ibid.), there is only a handful of verbs with SUBJECT Ident (e.g., marry, meet, run into, hear from, receive from) and only one with nonSUBJECT Ident (come up to), none of which propagate with an operation like the Japanese complex verb formation. Moreover, even using these verbs, the Ident effect on pronoun interpretation is not at all clear in English.20 The lack (or dispensability) of the speaker identification constraint does not mean that English centering is less constrained, because English pronouns are inherently more constrained than Japanese zero pronominals by the presence of grammatical features: gender, number, and person. We can view the Ident feature of Japanese zero pronominals as a way to make up for the lack of gender/number/person information available in overt pronouns. The SUBJECT constraint stated in (16), which is simply a subpart of the constraint in Japanese, thus remains adequate in English.

19 Implicit here are two weakest properties to be shared: 5) nonIdent alone and 6) nonSUBJECT alone. These were left out because of the scarcity of actual instances in discourse. I found, however, that exactly the same scale of shared properties accounts for the possibility of the intra-sentential zero pronominal binding in Japanese, and that the full scale of six properties is actually needed for it (see Kameyama 1986).
20 Consider the following example:
    1. John is my brother.
    2. He met Peter at a conference last weekend. <Ident=SUBJ>
    3. He came up to him and shook his hand. <Ident=nonSUBJ>
The third utterance should read "Peter came up to John" if Ident overrides SUBJ. More speakers gave the reverse interpretation, however, showing the preference for the SUBJ-SUBJ coreference.
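The four-way scale in (24) can likewise be given a rough computational reading. The sketch below, in Python, treats each zero pronominal as a set of properties and ranks the property shared by the two Cb-retaining pronominals; the encoding is an illustrative assumption of mine, not the paper's formalism.

    def shared_property_rank(prev: set, curr: set) -> int:
        """Rank of the property shared by two Cb-retaining zero pronominals,
        per (24): Ident-SUBJECT > Ident alone > SUBJECT alone >
        nonIdent-nonSUBJECT; no shared property favors a Cb shift."""
        shared = prev & curr
        if {"IDENT", "SUBJ"} <= shared:
            return 4
        if "IDENT" in shared:
            return 3
        if "SUBJ" in shared:
            return 2
        if not (prev | curr):
            return 1      # both pronominals are nonIdent-nonSUBJECT
        return 0

    # (23): the Cb (Masao) was realized by a zero-Ident-SUBJECT in (23)-2;
    # in (23)-3 the verb has OBJECT2 Ident, so the candidate realizers are
    # the zero-SUBJECT (nonIdent) and the zero-OBJECT2 (Ident).
    prev = {"IDENT", "SUBJ"}
    print(shared_property_rank(prev, {"SUBJ"}))            # 2: SUBJECT alone
    print(shared_property_rank(prev, {"IDENT", "OBJ2"}))   # 3: Ident overrides SUBJECT

The higher rank for the zero-OBJECT2 candidate mirrors the preferred reading of (23)-3, in which the Cb is carried by the Ident argument rather than the subject.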
3.2. Perception verbs: possible link to Ident

Perception verbs like see/hear, look/sound, etc. anchor the speaker's perspective just like Japanese Ident verbs. For example:

(25) 1. Dan went to a party yesterday.
     2. He saw his high school friend Jim. [Cb<SUBJ>=Dan]
     3. He looked awfully pale.
        [preference: Jim looked pale (to Dan).]

(26) 1. Maria finally got her phone reconnected.
     2. She called her sister Bella. [Cb<SUBJ>=Maria]
     3. She sounded depressed.
        [preference: Bella sounded depressed (to Maria)]

Equivalent sequences in Japanese give rise to the same interpretation, that is, the single pronominal element in the third utterance picks out the previous non-Cb. This exceptional case can be explained if verbs like look and sound are used to describe states perceived from the viewpoint of the individual the speaker currently 'identifies with'. As a consequence, the SUBJECT referent of such a description is typically other than the one currently identified with. By making the previous Cb "the individual the speaker currently identifies with", the preferred readings of (25)-3 and (26)-3 can be explained. This indicates that the speaker's viewpoint is closely related to the Cb whether or not there is an Ident-based constraint in the language.

Although there is a close relationship between Ident and these perception reports, the 'grammatical' status of the latter is not very clear. In particular, it is questionable whether the effect of perception verbs should be differentiated from commonsense-based interpretations as in the following example: Sam hit Bill on the head. He hit him back on the chin. It is an area open for more detailed studies in the future.

4. Conclusions

Within the framework of the Centering approach to pronoun resolution in discourse, I have proposed an additional constraint for Japanese and English. This property-sharing constraint requires that two pronominal expressions that retain the same Cb across adjacent utterances share a certain common grammatical property. This property has been identified in two dimensions. One has to do with the grammatical function SUBJECT, and the other has to do with the speaker identification property Ident. The latter is necessary for Japanese discourse where the primary Cb-realizer is the zero pronominal, but not for English discourse where it is the (unstressed) overt pronoun. The resulting constraint complements the original Centering rule, accounting for its apparent violations and providing a solution to the interpretation of multi-pronominal utterances.

Two significant implications of the proposed constraint have been discussed. First, the SUBJECT constraint provides an alternative account of anaphora interpretation that appears to be due to structural parallelism. This reconciliation of centering/focusing and parallelism is a major advantage of this constraint. Second, the speaker identification constraint found in Japanese indicates a close association between centering and the speaker's viewpoint. In particular, it sheds light on what underlies the effect of perception reports on pronoun resolution. These results, by drawing on facts in two very different languages, demonstrate the cross-linguistic applicability of the centering framework in general.

The present property-sharing constraint highlights a grammatical aspect that contributes to local discourse coherence. It will be integrated into the default rules which, by ordering the candidate referents for a pronominal expression, control the pragmatic inferences involved in pronoun resolution.

ACKNOWLEDGEMENTS

My special thanks go to Barbara Grosz for her guidance and encouragement for the work from which this paper developed. I have also greatly profited from discussions with Aravind Joshi and comments on an earlier version by N. Abe, M. Papalaskari, R. Rubinoff, J. Smudski, and B. Webber.

REFERENCES

Joshi, Aravind and Scott Weinstein. (1981) Control of Inference: Role of Some Aspects of Discourse Structure - Centering. In Proceedings of the International Joint Conference on Artificial Intelligence. Vancouver, B.C.: 385-387.

Grosz, Barbara J.; Aravind K. Joshi; and Scott Weinstein. (1983) Providing a Unified Account of Definite Noun Phrases in Discourse. In Proceedings of the 21st Annual Meeting of the ACL. Association of Computational Linguistics, Cambridge, Mass: 44-50.

Grosz, Barbara J.; Aravind K. Joshi; and Scott Weinstein. (in preparation) Towards a Computational Theory of Discourse Interpretation. MS. SRI International AI-Center, Menlo Park, CA.

Grosz, Barbara J. and Candace L. Sidner. (1985) The Structure of Discourse Structure. Report No.
CSLI-85-39, Center for the Study of Language and Information, Stanford, California. (To appear in Computational Linguistics 1986.)

Kameyama, Megumi. (1985) Zero Anaphora: The Case of Japanese. Ph.D. dissertation, Stanford University, Stanford, California.

Kameyama, Megumi. (1986) Japanese Zero Pronominal Binding: Where Syntax and Discourse Meet. Paper presented at the Second SDF Workshop in Japanese Syntax, Center for the Study of Language and Information, Stanford, California, March 7-9.

Kuno, Susumu and Etsuko Kaburagi. (1977) Empathy and Syntax. Linguistic Inquiry 8: 627-672.

Sidner, Candace L. (1979) Towards a Computational Theory of Definite Anaphora Comprehension in English Discourse. Technical Report TR-537, MIT AI Lab, Cambridge, Mass.

Sidner, Candace L. (1981) Focusing for Interpretation of Pronouns. American Journal of Computational Linguistics. 7(4): 217-231.

Sidner, Candace L. (1983) Focusing in the Comprehension of Definite Anaphora. In: Michael Brady and Robert C. Berwick, Eds., Computational Models of Discourse. MIT Press, Cambridge, Mass.
A MODEL OF PLAN INFERENCE THAT DISTINGUISHES BETWEEN THE BELIEFS OF ACTORS AND OBSERVERS

Martha E. Pollack
Artificial Intelligence Center and Center for the Study of Language and Information
SRI International
333 Ravenswood Avenue
Menlo Park, CA 94025

ABSTRACT

Existing models of plan inference (PI) in conversation have assumed that the agent whose plan is being inferred (the actor) and the agent drawing the inference (the observer) have identical beliefs about actions in the domain. I argue that this assumption often results in failure of both the PI process and the communicative process that PI is meant to support. In particular, it precludes the principled generation of appropriate responses to queries that arise from invalid plans. I describe a model of PI that abandons this assumption. It rests on an analysis of plans as mental phenomena. Judgements that a plan is invalid are associated with particular discrepancies between the beliefs that the observer ascribes to the actor when the former believes that the latter has some plan, and the beliefs that the observer herself holds. I show that the content of an appropriate response to a query is affected by the types of any such discrepancies of belief judged to be present in the plan inferred to underlie that query. The PI model described here has been implemented in SPIRIT, a small demonstration system that answers questions about the domain of computer mail.

INTRODUCTION

The importance of plan inference (PI) in models of conversation has been widely noted in the computational-linguistics literature. Incorporating PI capabilities into systems that answer users' questions has enabled such systems to handle indirect speech acts [13], supply more information than is actually requested in a query [2], provide helpful information in response to a yes/no query answered in the negative [2], disambiguate requests [17], resolve certain forms of intersentential ellipsis [6,11], and handle such discourse phenomena as clarification subdialogues [11], and correction or "debugging" subdialogues [16,11].

The research reported in this paper has been made possible in part by an IBM Graduate Fellowship, in part by a gift from the Systems Development Foundation, and in part by support from the Defense Advanced Research Projects Agency under Contract N00039-84-K-0078 with the Space and Naval Warfare Command. The views and conclusions contained in this document are those of the author and should not be interpreted as representative of the official policies, either expressed or implied, of the Defense Advanced Research Projects Agency or the United States Government. I am grateful to Barbara Grosz, James Allen, Phil Cohen, Amy Lansky, Candy Sidner and Bonnie Webber for their comments on an earlier draft.

The PI process in each of these systems, however, has assumed that the agent whose plan is being inferred (to whom I shall refer as the actor), and the agent drawing the inference (to whom I shall refer as the observer), have identical beliefs about the actions in the domain. Thus, Allen's model, which was one of the earliest accounts of PI in conversation1 and inspired a great deal of the work done subsequently, includes, as a typical PI rule, the following: "SBAW(P) => SBAW(ACT) if P is a precondition of ACT" [2, page 120].
This rule can be glossed as "if the system (observer) believes that an agent (actor) wants some proposition P to be true, then the system may draw the inference that the agent wants to perform some action ACT of which P is a precondition." Note that it is left unstated precisely who it is--the observer or the actor--that believes that P is a precondition of ACT. If we take this to be a belief of the observer, it is not clear that the latter will infer the actor's plan; on the other hand, if we consider it to be a belief of the actor, it is unclear how the observer comes to have direct access to it. In practice, there is only a single set of operators relating preconditions and actions in Allen's system; the belief in question is regarded as being both the actor's and the observer's.

In many situations, an assumption that the relevant beliefs of the actor are identical with those of the observer results in failure not only of the PI process, but also of the communicative process that PI is meant to support. In particular, it precludes the principled generation of appropriate responses to queries that arise from invalid plans. In this paper, I report on a model of PI in conversation that distinguishes between the beliefs of the actor and those of the observer. The model rests on an analysis of plans as mental phenomena: "having a plan" is analyzed as having a particular configuration of beliefs and intentions. Judgements that a plan is invalid are associated with particular discrepancies between the beliefs that the observer ascribes to the actor when the former believes that the latter has some plan, and the beliefs the observer herself holds. I give an account of different types of plan invalidities, and show how this account provides an explanation for certain regularities that are observable in cooperative responses to questions. The PI model described here has been implemented in SPIRIT, a small demonstration system that answers questions about the domain of computer mail. More extensive discussion of both the PI model and SPIRIT can be found in my dissertation [14].

1 Allen's article [2] summarizes his dissertation research [1].

PLANS AS MENTAL PHENOMENA

We can distinguish between two views of plans. As Bratman [5, page 271] has observed, there is an ambiguity in speaking of an agent's plan: "On the one hand, [this] could mean an appropriate abstract structure--some sort of partial function from circumstances to actions, perhaps. On the other hand, [it] could mean an appropriate state of mind, one naturally describable in terms of such structures." We might call the former sense the data-structure view of plans, and the latter the mental phenomenon view of plans. Work in plan synthesis (e.g., Fikes and Nilsson [8], Sacerdoti [15], Wilkins [18], and Pednault [12]), has taken the data-structure view, considering plans to be structures encoding aggregates of actions that, when performed in circumstances satisfying some specified preconditions, achieve some specified results. For the purposes of PI, however, it is much more useful to adopt a mental phenomenon view and consider plans to be particular configurations of beliefs and intentions that some agent has. After all, inferring another agent's plan means figuring out what actions he "has in mind," and he may well be wrong about the effects of those intended actions. Consider, for example, the plan I have to find out how Kathy is feeling.
Believing that Kathy is at the hospital, I plan to do this by finding out the phone number of the hospital, calling there, asking to be connected to Kathy's room, and finally saying "How are you doing?" If, unbeknownst to me, Kathy has already been discharged, then executing my plan will not lead to my goal of finding out how she is feeling. For me to have a plan to do β that consists of doing some collection of actions Π, it is not necessary that the performance of Π actually lead to the performance of β. What is necessary is that I believe that its performance will do so. This insight is at the core of a view of plans as mental phenomena; in this view a plan "exists"--i.e., gains its status as a plan--by virtue of the beliefs, as well as the intentions, of the person whose plan it is. Further consideration of our common-sense conceptions of what it means to have a plan leads to the following analysis [14, Chap. 3]:2

(P0) An agent G has a plan to do β, that consists in doing some set of acts Π, provided that
  1. G believes that he can execute each act in Π.
  2. G believes that executing the acts in Π will entail the performance of β.
  3. G believes that each act in Π plays a role in his plan. (See discussion below.)
  4. G intends to execute each act in Π.
  5. G intends to execute Π as a way of doing β.
  6. G intends each act in Π to play a role in his plan.

2 Although this definition ignores some important issues of commitment over time, as discussed by Bratman [4] and Cohen and Levesque [7], it is sufficient to support the PI process needed for many question-answering situations. This is because, in such situations, unexpected changes in the world that would force a reconsideration of the actor's intentions can usually be safely ignored.

The notion of an act playing a role in a plan is defined in terms of two relationships over acts: generation, in the sense defined by Goldman [9], and enablement. Roughly, one act generates another if, by performing the first, the agent also does the second; thus, saying to Kathy "How are you doing?" may generate asking her how she is feeling. Or, to take an example from the computer-mail domain, typing DEL . at the prompt for a computer mail system may generate deleting the current message, which may in turn generate cleaning out one's mail file. In contrast, one act enables the generation of a second by a third if the first brings about circumstances that are necessary for the generation. Thus, typing HEADER 15 may enable the generation of deleting the fifteenth message by typing DEL ., because it makes message 15 be the current message, to which '.' refers.3 The difference between generation and enablement consists largely in the fact that, when an act α generates an act β, the agent need only do α, and β will automatically be done also. However, when α enables the generation of some γ by β, the agent needs to do something more than just α to have done either β or γ. In this paper, I consider only the inference of a restricted subset of plans, which I shall call simple plans. An agent has a simple plan if and only if he believes that all the acts in that plan play a role in it by generating another act; i.e., if it includes no acts that he believes are related to one another by enablement.
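A minimal sketch may help fix these distinctions. The Python encoding below of acts as act-type/agent/time triples and of a generation chain is my own illustration, not SPIRIT's representation.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Act:
        act_type: str   # e.g. "type-DEL-." -- an act-type, not an act
        agent: str
        time: str       # a temporal interval, as in Allen's interval logic

    # The mail-domain example: three simultaneous acts by agent G at t2.
    a1 = Act("type-DEL-.", "G", "t2")
    a2 = Act("delete-current-message", "G", "t2")
    a3 = Act("clean-out-mail-file", "G", "t2")

    # Generation relates acts, not act-types: whether typing DEL . generates
    # deleting the current message depends on the circumstances at the time.
    generates = {(a1, a2), (a2, a3)}

    def is_generation_chain(acts) -> bool:
        """A simple plan's acts form one chain of generation, no enablement."""
        return all((a, b) in generates for a, b in zip(acts, acts[1:]))

    print(is_generation_chain([a1, a2, a3]))  # True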
Actions or acts--I will use the two terms interchangeahly--can be thought of as triples of act.type, agent, and time. Generation is a relation over actions, not over act-types. Not every case of an agent typing DEL • will result in the agent deleting the current message; for example, my typing it just now did not, because I was not typing it to a computer mail system. Similarly, executability-- the relation expressed in Clause (1) of (P0) as "can execute"-- applies to actions, and the objects of an agent's intentions are, in this model, also actions. Using the representation language specified in my thesis [14], which builds upon Allen's interval-based temporal logic [3], the conditions on G's having a simple plan to do fl can be encoded as follows: (P1) SIMPLE-PLAN(G ,a~,[a~,..., a~-i 1,t2, tl )~ (i) BEL(G,EXEC(ai,G,t2),tl), for i = 1 ..... n A (ii) BEL(G,GEN(ai, cq+I,G,t2),tl), for i = 1 .... ,n-1 A (iii) INT(G,al, t2,tl), for i = 1 ..... n A (iv) INT(G,by(ai, ai+l), t2,tl), for i = 1 .... ,n-1 The left-hand side of (P1) denotes that the agent G has, at time tl, a simple plan to do an, consisting of doing the set of acts {el,..., an-l} at t2. Note that all these are simultaneous acts; this is a consequence of the restriction to simple plans. The right-hand side of (P1) corresponds directly to (PO), ex- cept that, in keeping with the restriction to simple plans, spe- cific assertions about each act generating another replace the SEnablement here thus differs from the usual binary relation in which one action enables another. Since this paper does not further consider plans with enabling actions, the advantages of the alternative definition will not be discussed. 208 more general statement regarding the fact that each act plays a role in the plan. The relation BEL(G,P,t) should be taken to mean that agent G believes proposition P throughout time interval t; INT(G,a, tz,tl) means that at time tl G intends to do a at t2. The relation EXEC(a,G,t) is true if and only if the act of G doing a at t is ezecutable, and the relation GEN(a,//,G,t) is true if and only if the act of G doing a at t generates the act of G doing// at t. The function by maps two act-type terms into a third act-type term: if an agent G intends to do by(a,//), then G intends to do the complex act //-by-a, i.e., he intends to do a in order to do//. Further dis- cussion of these relations and functions can be found in Pollack [14, Chap. 4]. Clause (i) of (P1) captures clause (1) of (P0). 4 Clause (iS) of (P1) captures both clauses (2) and (3) of (P0): when i takes the value n-l, clause (iS) of (P1) captures the requirement, stated in clause (2) of (P01, that G believes his acts will entail his goal; when i takes values between 1 and n-2, it captures the requirement of clause (3) of (P0), that G believes each of his acts plays a role in his plan. Similarly, clause (iii) of (Pl) captures clause (4) of (P0), and clause (iv) of (P1) captures clauses (5) and (6) of (PO). (P1) can be used to state what it means for an actor to have an invalid simple plan: G has an invalid simple plan if and only if he has the configuration of beliefs and intentions listed in (P1), where one or more of those beliefs is incorrect, and, consequently, one or more of the intentions is unrealizable. The correctness of the actor's beliefs thus determines the validity of his plan: if all the beliefs that are part of his plan are correct, then all the intentions in it are realizable, and the plan is valid. 
Validity in this absolute sense, however, is not of primary concern in modeling plan inference in conversation. What is important here is rather the observer's judgement of whether the actor's plan is valid. It is to the analysis of such invalidity judgements, and their effect on the question- answering process, that we now turn. PLAN INFERENCE IN QUESTION-ANSWERING Models of the question-answering process often include a claim that the respondent (R) must infer the plans of the questioner (Q). So R is the observer, and Q the actor. Building on the analysis of plans as mental phenomena, we can say that, if R believes that she has inferred Q's plan, there is some set of be- liefs and intentions satisfying (P1) that R believes Q has (or is at least likely to have). Then there are particular discrepancies that may arise between the beliefs that R ascribes to Q when she believes he has some plan, and the beliefs that R herself holds. Specifically, R may not herself believe one or more of the beliefs, corresponding to Clauses (i) and (iS) of (P1), that she ascribes to Q. We can associate such discrepancies with 41n fact, it captures more: to encode Clause (i) of (P0), the pacameter 1 in Clause (i) of (PI) need only vary between I and n-l. However, given the relationship between EXEC and GEN specified in Pollack [t4], namely EX EC(a, G, t) A GEN (a, ~, G, t) ~ EXEC(~, G, t) the instance of Clause (i) of (P1) with i=n is a consequence of the instance of Clause (i) with i=n-1 and the instance of Clause (iS) with i=n-l. A similar argument can be made about Clause (iii). R's judgement that the plan she has inferred is invalid, s The type of any invalidities, defined in terms of the clauses of (PI) that contain the discrepant beliefs, can be shown to influence the content of a cooperative response. However, they do not fully determine it: the plan inferred to underlie a query, along with any invalidities it is judged to have, are but two factors affecting the response-generation process, the most significant others being factors of relevance and salience. I will illustrate the effect of invalidity judgements on re- sponse content with a query of the form "I want to perform an act of ~, so I need to find out how to perform an act of a," in which the goal is explicit, as in example (1) below°: (I) "I want to prevent Tom from reading my mail file. How can I set the permissions on it to faculty-read only? ~ In questions in which no goal is mentioned explicitly, analysis depends upon inferring a plan leading to a goal that is rea- sonable in the domain situation. Let us assume that, given query (1), R has inferred that Q has the simple plan that con- sists only in setting the permissions to faculty-read only, and thereby directly preventing Tom from reading the file, i.e.: (2)BEL(R,SIMPLE-PLAN(Q, prevent (mmfile,read,tom), [set-permissions(mmfile,read,faeult y)], t2, tl), tz) Later in this paper, I will describe the process by which R can come to have this belief. Bear in mind that, by (P1), (2) can be expanded into a set of beliefs that R has about Q's beliefs and intentions. The first potential discrepancy is that R may believe to be false some belief, corresponding to Clause (i) of (PI), that, by virtue of (2), she ascribes to Q. In such a case, I will say that she believes that some action in the inferred plan is un- e=~utable. 
Examples of responses in which R conveys this information are (3) (in which R believes that at least one in- tended act is unexecutable) and (4) (in which R believes that at least two intended acts are unexeeutable): (3) "There ia no way for you to set the permissions on a tile to faculty-read only. What you can do is move it into a password- protected subdirectory; that will prevent Tom from reading it." (4) "There is no way far you to set the permissions on a file to faculty.read only, nor is there any way for you to prevent Tom from reading it." SThle auumee that R always believes that her own beliefs are complete and correct. Such an usumption is not an unreasonable one for question- answering systems to make. More general conversational systems must abandon this usumption, sometimes updating their own beliefs upon de- tecting a discrepancy. eThe analysis below is related to that provided by 2oshi, Webber, and Weischedel [10}. There are significant differences in my approach, how- ever, which involve (i) a different structural analysis, which applies ane=- scala6111lll to agtions rather than plans and introduces incoherence (this latter notion I dellne in the next section); (ii) a claim that the types of invtlldlties (e.g., formedness, executability of the queried action, and ex- ecutsbility of a goal action) are independent of one another; and (iii) a claim that recognition of any invalidities, while necessary for determining what information to include in an appropriate response, is not in itself sufficient for this purpose. Also, Joshi et el. do not consider the question of how invalid plans can be inferred. 209 The discrepancy resulting in (3) is represented in (5); the dis- crepancy in (4) is represented in (5) plus (6): (5) BEL(R,B EL(Q,EXEC(set-permissions(mmfile,read,facult y), Q,tz), tl), t~) A BEL(R,-,EXEC(set-permissions(mmfile,read,facult y), Q,t2), t~) (6) BEL(R,BEL(Q,EXEC(prevent(mmfile,read,tom), Q,t2), tl), ti) A BEL(R,--EXgC(prevent (ram file,read,tom), Q,t2), h) The second potential discrepancy is that R may believe false some belief corresponding to Clause (ii) of (P1) that, by virtue of (2), she ascribes to Q. I will then say that she believes the plan to be ill-formed. In this ease, her response may con~'ey that the intended acts in the plan will not fit together as ex- pected, as in (7), which might be uttered if R believes it to be mutually believed by R and Q that Tom is the system man- ager: (7) "Well, the command is SET PROTECTION ---- (Fac- ulty:Read), but that won't keep Tom out: file permissions don't apply to the system manager." The discrepancy resulting in (7) is (8): (8)BEL(R,BEL(Q,GEN(set-permissions(mmfile,read,facult y), prevent (ram file,read,tom), Q,t2), tl), h) A BEL(R,-~G EN (set-permissions(mmfile,read,facult y), prevent (mmfile,read,tom), Q,t2), h) Alternatively, there may be some combination of these dis- crepancies between R's own beliefs and those that R attributes to Q, as reflected in a response such as (9): (9) "There is no way for you to set the permissions to faculty- read only; and even if you could, it wouldn't keep Tom out: tile permissions don't apply to the system manager." The discrepancies encoded in (5) and (8) together might result in (9). Of course, it is also possible that no discrepancy exists at all, in which ease I will say that R believes that Q's plan is valid. A response such as (10) can be modeled as arising from an inferred plan that R believes valid: (10) "Type SET PROTECTION = (Faculty:Read)." 
Of the eight possible combinations of formedness, executability of the queried act, and executability of the goal act, seven are possible: the only logically incompatible combination is a well-formed plan with an executable queried act, but an unexecutable goal act. This range of invalidities accounts for a great deal of the information conveyed in naturally occurring dialogues. But there is an important regularity that the PI model does not yet explain.

A PROBLEM FOR PLAN INFERENCE

In all of the preceding cases, R has intuitively "made sense" of Q's query, by determining some underlying plan whose components she understands, though she may also believe that the plan is flawed. For instance in (7), R has determined that Q may mistakenly believe that, when one sets the permissions on a file to allow a particular access to a particular group, no one who is not a member of that group can gain access to the file. This (incorrect) belief explains why Q believes that setting the permissions will prevent Tom from reading the file.

There are also cases in which R may not even be able to "make sense" of Q's query. As a somewhat whimsical example, imagine Q saying:

(11) "I want to talk to Kathy, so I need to find out how to stand on my head."

In many contexts, a perfectly reasonable response to this query is "Huh?". Q's query is incoherent: R cannot understand why Q believes that finding out how to stand on his head (or standing on his head) will lead to talking with Kathy. One can, of course, construct scenarios in which Q's query makes perfect sense: Kathy might, for example, be currently hanging by her feet in gravity boots. The point here is not to imagine such circumstances in which Q's query would be coherent, but instead to realize that there are many circumstances in which it would not.

The judgement that a query is incoherent is not the same as a judgement that the plan inferred to underlie it is ill-formed. To see this, contrast example (11) with the following:

(12) "I want to talk to Kathy. Do you know the phone number at the hospital?"

Here, if R believes that Kathy has already been discharged from the hospital, she may judge the plan she infers to underlie Q's query to be ill-formed, and may inform him that calling the hospital will not lead to talking to Kathy. She can even inform him why the plan is ill-formed, namely, because Kathy is no longer at the hospital. This differs from (11), in which R cannot inform Q of the reason his plan is invalid, because she cannot, on an intuitive level, even determine what his plan is.

Unfortunately, the model as developed so far does not distinguish between incoherence and ill-formedness. The reason is that, given a reasonable account of semantic interpretation, it is transparent from the query in (11) that Q intends to talk to Kathy, intends to find out how to stand on his head, and intends his doing the latter to play a role in his plan to do the former, and that he also believes that he can talk to Kathy, believes that he can find out how to stand on his head, and believes that his doing the latter will play a role in his
Then, since R does not herself believe that the former act will lead to the latter, on the analysis so far given, we would regard R as judging Q's plan to be ill-formed. But this is not the desired analysis: the model should instead capture the fact that R cannot make sense of Q's query here--that it is incoherent.

Let us return to the set of examples about setting the permissions on a file, discussed in the previous section. In her semantic interpretation of the query in (1), R may come to have a number of beliefs about Q's beliefs and intentions. Specifically, all of the following may be true:

(13) BEL(R,BEL(Q,EXEC(set-permissions(mmfile,read,faculty),Q,t2),t1),t1)
(14) BEL(R,BEL(Q,EXEC(prevent(mmfile,read,tom),Q,t2),t1),t1)
(15) BEL(R,BEL(Q,GEN(set-permissions(mmfile,read,faculty),prevent(mmfile,read,tom),Q,t2),t1),t1)
(16) BEL(R,INT(Q,set-permissions(mmfile,read,faculty),t2,t1),t1)
(17) BEL(R,INT(Q,prevent(mmfile,read,tom),t2,t1),t1)
(18) BEL(R,INT(Q,by(set-permissions(mmfile,read,faculty),prevent(mmfile,read,tom)),t2,t1),t1)

Together, (13)-(18) are sufficient for R's believing that Q has the simple plan as expressed in (2). This much is not surprising. In effect Q has stated in his query what his plan is--to prevent Tom from reading the file by setting the permission on it to faculty-read only--so, of course, R should be able to infer just that. And if R further believes that the system manager can override file permissions and that Tom is the system manager, but also that Q does not know the former fact, R will judge that Q's plan is ill-formed, and may provide a response such as that in (7). There is a discrepancy here between the belief R ascribes to Q in satisfaction of Clause (ii) of (P1)--namely, that expressed in (15)--and R's own beliefs about the domain.

But what if R, instead of believing that it is mutually believed by Q and R that Tom is the system manager, believes that they mutually believe that he is a faculty member? In this case, (13)-(18) may still be true. However, we do not want to say that this case is indistinguishable from the previous one.

7Actually, the requirement that Q have these beliefs may be slightly too strong; see Pollack [14, Chap. 3] for discussion.

In the previous case, R understood the source of Q's erroneous belief: she realized that Q did not know that the system manager could override file protections, and therefore thought that, by setting permissions to restrict access to a group that Tom is not a member of, he could prevent Tom from reading the file. In contrast, in the current case, R cannot really understand Q's plan: she cannot determine why Q believes that he will prevent Tom from reading the file by setting the permissions on it to faculty-read only, given that Q believes that Tom is a faculty member. This current case is like the case in (11): Q's query is incoherent to R.

To capture the difference between ill-formedness and incoherence, I will claim that, when an agent R is asked a question by an actor Q, R needs to attempt to ascribe to Q more than just a set of beliefs and intentions satisfying (P1). Specifically, for each belief satisfying Clause (ii) of (P1), R must also ascribe to Q another belief that explains the former in a certain specifiable way.
The beliefs that satisfy Clause (ii) are beliefs about the relation between two particular actions: for instance, the plan underlying query (12) includes Q's belief that his action of calling the hospital at t2 will generate his action of establishing a communication channel to Kathy at t2. This belief can be explained by a belief Q has about the relation between the act-types "calling a location" and "establishing a communication channel to an agent." Q may believe that acts of the former type generate acts of the latter type provided that the agent to whom the communication channel is to be established is at the location to be called. Such a belief can be encoded using the predicate CGEN, which can be read "conditionally generates," as follows:

(19) BEL(Q,CGEN(call(X),establish-channel(Y),at(X,Y)),t1)

The relation CGEN(α,β,C) is true if and only if acts of type α performed when condition C holds will generate acts of type β. Thus, the sentence CGEN(α,β,C) can be seen as one possible interpretation of a hierarchical planning operator with header β, preconditions C, and body α. Conditional generation is a relation between two act-types and a set of conditions; generation, which is a relation between two actions, can be defined in terms of conditional generation.

In reasoning about (12), R can attribute to Q the belief expressed in (19), combined with a belief that Kathy will be at the hospital at time t2. Together, these beliefs explain Q's belief that, by calling the hospital at t2, he will establish a communication channel to Kathy. Similarly, in reasoning about query (1) in the case in which R does not believe that Q knows that Tom is a faculty member, R can ascribe to Q the belief that, by setting the permissions on a file to restrict access to a particular group, one denies access to everyone who is not a member of that group, as expressed in (20):

(20) BEL(R,BEL(Q,CGEN(set-permissions(X,P,Y),prevent(X,P,Z),¬member(Z,Y)),t1),t1)

She can also ascribe to Q the belief that Tom is not a member of the faculty (or, more precisely, that Tom will not be a member of the faculty at the intended performance time t2), i.e.,

(21) BEL(R,BEL(Q,HOLDS(¬member(tom,faculty),t2),t1),t1)

The conjunction of these two beliefs explains Q's further belief, expressed in (15), that, by setting the permissions to faculty-read only at t2, he can prevent Tom from reading the file. In contrast, in example (11), R has no basis for ascribing to Q beliefs that will explain why he thinks that standing on his head will lead to talking with Kathy. And, in the version of example (1) in which R believes that Q believes that Tom is a faculty member, R has no basis for ascribing to Q a belief that explains Q's belief that setting the permissions to faculty-read only will prevent Tom from reading the file.

Explanatory beliefs are incorporated in the PI model by the introduction of explanatory plans, or eplans. Saying that an agent R believes that another agent Q has some eplan is shorthand for describing a set of beliefs possessed by R, specifically:

(P2) BEL(R,EPLAN(Q,αn,[α1,...,αn-1],[ρ1,...,ρn-1],t2,t1),t1) if and only if
(i) BEL(R,BEL(Q,EXEC(αi,Q,t2),t1),t1), for i = 1,...,n ∧
(ii) BEL(R,BEL(Q,GEN(αi,αi+1,Q,t2),t1),t1), for i = 1,...,n-1 ∧
(iii) BEL(R,INT(Q,αi,t2,t1),t1), for i = 1,...,n ∧
(iv) BEL(R,INT(Q,by(αi,αi+1),t2,t1),t1), for i = 1,...,n-1 ∧
(v) BEL(R,BEL(Q,ρi,t1),t1), where each ρi is CGEN(αi,αi+1,Ci) ∧ HOLDS(Ci,t2)

I claim that the PI process underlying cooperative question-answering can be modeled as an attempt to infer an eplan, i.e., to form a set of beliefs about the questioner's beliefs and intentions that satisfies (P2). Thus the next question to ask is: how can R come to have such a set of beliefs?
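As a concrete gloss on (P2), here is a minimal sketch of the eplan construct as a data structure: a chain of acts plus, for each adjacent pair, the explanatory belief ρi. The class names and the structural check are my own illustrative assumptions, not the paper's implementation.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class CGen:
    act_type: str      # alpha_i
    generates: str     # alpha_{i+1}
    condition: str     # C_i

@dataclass
class EPlan:
    acts: list                                    # [alpha_1, ..., alpha_n]
    supports: list = field(default_factory=list)  # [rho_1, ..., rho_{n-1}]

    def well_structured(self) -> bool:
        # (P2) requires one explanatory belief linking each adjacent pair of acts.
        if len(self.supports) != len(self.acts) - 1:
            return False
        return all(rho.act_type == self.acts[i] and
                   rho.generates == self.acts[i + 1]
                   for i, rho in enumerate(self.supports))

plan = EPlan(acts=["call(hospital)", "establish-channel(kathy)"],
             supports=[CGen("call(hospital)", "establish-channel(kathy)",
                            "at(hospital,kathy)")])
print(plan.well_structured())  # -> True; a trivial one-act eplan has supports == []
```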
THE INFERENCE PROCESS

In the complete PI model, the inference of an eplan is a two-stage process. First, R infers beliefs and intentions that Q plausibly has. Then, when she has found some set of these that is large enough to account for Q's query, their epistemic status can be upgraded, from beliefs and intentions that R believes Q plausibly has, to beliefs and intentions that R will, for the purposes of forming her response, consider Q actually to have. Within this paper, however, I will blur the distinction between attitudes that R believes Q plausibly has and attitudes that R believes Q indeed has; in consequence I will also omit discussion of the second stage of the PI process.

A set of plan inference rules encodes the principles by which an inferring agent R can reason from some set of beliefs and intentions--call this the antecedent eplan--that she thinks Q has, to some further set of beliefs and intentions--call this the consequent eplan--that she also thinks he has. The beliefs and intentions that the antecedent eplan comprises are a proper subset of those that the consequent eplan comprises. To reason from antecedent eplan to consequent eplan, R must attribute some explanatory belief to Q on the basis of something other than just Q's query.

In more detail, if part of R's belief that Q has the antecedent eplan is a belief that Q intends to do some act α, and R has reason to believe that Q believes that act-type α conditionally generates act-type γ under condition C, then R can infer that Q intends to do α in order to do γ, believing as well that C will hold at performance time. R can also reason in the other direction: if part of her belief that Q has some plausible eplan is a belief that Q intends to do some act α, and R has reason to believe that Q believes that act-type γ conditionally generates act-type α under condition C, then R can infer that Q intends to do γ in order to do α, believing that C will hold at performance time.

The plan inference rules encode the pattern of reasoning expressed in the last two sentences. Different plan inference rules encode the different bases upon which R may decide that Q may believe that a conditional generation relation holds between some α, an act of which is intended as part of the antecedent eplan, and some γ. This ascription of beliefs, as well as the ascription of intentions, is a nonmonotonic process. For an arbitrary proposition P, R will only decide that Q may believe that P if R has no reason to believe Q believes that ¬P.

In the most straightforward case, R will ascribe to Q a belief about a conditional generation relation that she herself believes true. This reasoning can be encoded in the representation language in rule (PI1):

(PI1) BEL(R,EPLAN(Q,αn,[α1,...,αn-1],[ρ1,...,ρn-1],t2,t1),t1) ∧ BEL(R,CGEN(αn,γ,C),t1) → BEL(R,EPLAN(Q,γ,[α1,...,αn],[ρ1,...,ρn],t2,t1),t1), where ρn = CGEN(αn,γ,C) ∧ HOLDS(C,t2)

This rule says that, if R's belief that Q has some eplan includes a belief that Q intends to do an act αn, and R also believes that act-type αn conditionally generates some γ under condition C, then R can (nonmonotonically) infer that Q has the additional intention of doing αn in order to do γ--i.e., that he intends to do by(αn,γ). Q's having this intention depends upon his also having the supporting belief that αn conditionally generates γ under some condition C, and the further belief that this C will hold at performance time. A rule symmetric to (PI1) is also needed, since R can not only reason about what acts might be generated by an act that she already believes Q intends, but also about what acts might generate such an act.
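A hedged sketch of how (PI1) might operate over ground data follows; unification over the act-type variables X, P, Y, Z is elided (act-types are matched as literal strings), and the names are illustrative rather than drawn from SPIRIT.

```python
# Ground CGEN beliefs that R herself holds, as in (22) below.
R_CGEN = [("set-permissions(X,P,Y)", "prevent(X,P,Z)",
           ("not member(Z,Y)", "not system-mgr(Z)"))]

def pi1(eplan_acts):
    """(PI1): extend an eplan's act chain via a CGEN belief R believes true."""
    last = eplan_acts[-1]
    for alpha, gamma, conds in R_CGEN:
        if alpha == last:
            rho = ("CGEN", alpha, gamma, conds)  # belief ascribed to Q
            yield eplan_acts + [gamma], rho

for acts, rho in pi1(["set-permissions(X,P,Y)"]):
    print(acts)
    print(rho)
```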
Consider R's use of (PI1) in attempting to infer the plan underlying query (1).8 R herself has a particular belief about the relation between the act-types "setting the permissions on a file" and "preventing someone access to the file," a belief we can encode as follows:

(22) BEL(R,CGEN(set-permissions(X,P,Y),prevent(X,P,Z),¬member(Z,Y) ∧ ¬system-mgr(Z)),t1)

From query (1), R can directly attribute to Q two trivial eplans:

8I have simplified somewhat in the following account for presentational purposes. A step-by-step account of this inference process is given in Pollack [14, Chap. 6].

(23) BEL(R,EPLAN(Q,set-permissions(mmfile,read,faculty),[ ],[ ],t2,t1),t1)
(24) BEL(R,EPLAN(Q,prevent(mmfile,read,tom),[ ],[ ],t2,t1),t1)

The belief in (23) is justified by the fact that (13) satisfies Clause (i) of (P2), (16) satisfies Clause (iii) of (P2), and Clauses (ii), (iv), and (v) are vacuously satisfied. An analogous argument applies to (24). Now, if R applies (PI1), she will attribute to Q exactly the same belief as she herself has, as expressed in (22), along with a belief that the condition C specified there will hold at t2. That is, as part of her belief that a particular eplan underlies (1), R will have the following belief:

(25) BEL(R,BEL(Q,CGEN(set-permissions(X,P,Y),prevent(X,P,Z),¬member(Z,Y) ∧ ¬system-mgr(Z)) ∧ HOLDS(¬member(tom,faculty) ∧ ¬system-mgr(tom),t2),t1),t1)

The belief that R attributes to Q, as expressed in (25), is an explanatory belief supporting (15). Note that it is not the same explanatory belief that was expressed in (20) and (21). In (25), the discrepancy between R's beliefs and R's beliefs about Q's beliefs is about whether Tom is the system manager. This discrepancy may result in a response like (26), which conveys different information than does (7) about the source of the judged ill-formedness:

(26) "Well, the command is SET PROTECTION = (Faculty:Read), but that won't keep Tom out: he's the system manager."

(PI1) (and its symmetric partner) are not sufficient to model the inference of the eplan that results in (7). This is because, in using (PI1), R is restricted to ascribing to Q the same beliefs about the relation between domain act-types as she herself has.9 The eplan that results in (7) includes a belief that R attributes to Q involving a relation between act-types that R believes false, specifically, the CGEN relation in (20). What is needed to derive this is a rule such as (PI2):

(PI2) BEL(R,EPLAN(Q,αn,[α1,...,αn-1],[ρ1,...,ρn-1],t2,t1),t1) ∧ BEL(R,CGEN(αn,γ,C1 ∧ ... ∧ Cm),t1) → BEL(R,EPLAN(Q,γ,[α1,...,αn],[ρ1,...,ρn],t2,t1),t1), where ρn = CGEN(αn,γ,C1 ∧ ... ∧ Ci-1 ∧ Ci+1 ∧ ... ∧ Cm) ∧ HOLDS(C1 ∧ ... ∧ Ci-1 ∧ Ci+1 ∧ ... ∧ Cm,t2)

9Hence, existing PI systems that equate R's and Q's beliefs about actions could, in principle, have handled examples such as (26), which require only the use of (PI1), although they have not done so. Further, while they could have handled the particular type of invalidity that can be inferred using (PI1), without an analysis of the general problem of invalid plans and their effects on cooperative responses, these systems would need to treat this as a special case in which a variant response is required.

What (PI2) expresses is that R may ascribe to Q a belief about a relation between act-types that is a slight variation of one she herself has. What (PI2) asserts is that, if there is some CGEN relation that R believes true, she may attribute to Q a belief in a similar CGEN relation that is stronger, in that it is missing one of the required conditions. If R uses (PI2) in attempting to infer the plan that underlies query (1), she may decide that Q's belief about the conditions under which setting the permissions on a file prevents someone from accessing the file does not include the person's not being the system manager. This can result in R attributing to Q the explanatory belief in (20) and (21), which, in turn, may result in a response such as that in (7).
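A companion sketch of (PI2), under the same simplifications as before: each candidate ascription is obtained by deleting one condition Ci from a CGEN relation that R herself believes true.

```python
def pi2(alpha, gamma, conds):
    """(PI2): ascribe a stronger CGEN belief by deleting one condition C_i."""
    for i in range(len(conds)):
        yield (alpha, gamma, conds[:i] + conds[i + 1:])

for belief in pi2("set-permissions(X,P,Y)", "prevent(X,P,Z)",
                  ("not member(Z,Y)", "not system-mgr(Z)")):
    print(belief)
# Deleting "not system-mgr(Z)" reproduces the belief ascribed in (20), i.e. the
# eplan behind response (7); deleting the other condition yields a second,
# less plausible candidate that other heuristics would have to rule out.
```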
Of course, both the kind of discrepancy that may be introduced by (PI1) and the kind that is always introduced by (PI2) may be present simultaneously, resulting in a response like (27):

(27) "Well, the command is SET PROTECTION = (Faculty:Read), but that won't keep Tom out: he's the system manager, and file permissions don't apply to the system manager."

(PI2) represents just one kind of variation of her own beliefs that R may consider attributing to Q. Additional PI rules encode other variations and can also be used to encode any typical misconceptions that R may attribute to Q.

IMPLEMENTATION

The inference process described in this paper has been implemented in SPIRIT, a System for Plan Inference that Reasons about Invalidities Too. SPIRIT infers and evaluates the plans underlying questions asked by users about the domain of computer mail. It also uses the result of its inference and evaluation to generate simulated cooperative responses. SPIRIT is implemented in C-Prolog, and has run on several different machines, including a Sun Workstation, a Vax 11-750, and a DEC-20. SPIRIT is a demonstration system, implemented to demonstrate the PI model developed in this work; consequently only a few key examples, which are sufficient to demonstrate SPIRIT's capabilities, have been implemented. Of course, SPIRIT's knowledge base could be expanded in a straightforward manner. SPIRIT has no mechanisms for computing relevance or salience and, consequently, always produces as complete an answer as possible.

CONCLUSION

In this paper I demonstrated that modeling cooperative conversation, in particular cooperative question-answering, requires a model of plan inference that distinguishes between the beliefs of actors and those of observers. I reported on such a model, which rests on an analysis of plans as mental phenomena. Under this analysis there can be discrepancies between an agent's own beliefs and the beliefs that she ascribes to an actor when she thinks he has some plan. Such discrepancies were associated with the observer's judgement that the actor's plan is invalid.
Then the types of any invalidities judged to be present in a plan inferred to underlie a query were shown to affect the content of a cooperative response. I further suggested that, to guarantee a cooperative response, the observer must attempt to ascribe to the questioner more than just a set of beliefs and intentions sufficient to believe that he has some plan: she must also attempt to ascribe to him beliefs that explain those beliefs and intentions. The eplan construct was introduced to capture this requirement. Finally, I described the process of inferring eplans--that is, of ascribing to another agent beliefs and intentions that explain his query and can influence a response to it.

REFERENCES

[1] James F. Allen. A Plan Based Approach to Speech Act Recognition. Technical Report TR 121/79, University of Toronto, 1979.
[2] James F. Allen. Recognizing intentions from natural language utterances. In Michael Brady and Robert C. Berwick, editors, Computational Models of Discourse, pages 107-166, MIT Press, Cambridge, Mass., 1983.
[3] James F. Allen. Towards a general theory of action and time. Artificial Intelligence, 23(2):123-154, 1984.
[4] Michael Bratman. Intention, Plans and Practical Reason. Harvard University Press, Cambridge, Ma., forthcoming.
[5] Michael Bratman. Taking plans seriously. Social Theory and Practice, 9:271-287, 1983.
[6] M. Sandra Carberry. Pragmatic Modeling in Information System Interfaces. PhD thesis, University of Delaware, 1985.
[7] Philip R. Cohen and Hector J. Levesque. Speech acts and rationality. In Proceedings of the 23rd Conference of the Association for Computational Linguistics, pages 49-59, Stanford, Ca., 1985.
[8] R. E. Fikes and Nils J. Nilsson. STRIPS: a new approach to the application of theorem proving to problem solving. Artificial Intelligence, 2:189-208, 1971.
[9] Alvin I. Goldman. A Theory of Human Action. Prentice-Hall, Englewood Cliffs, N.J., 1970.
[10] Aravind K. Joshi, Bonnie Webber, and Ralph Weischedel. Living up to expectations: computing expert responses. In Proceedings of the Fourth National Conference on Artificial Intelligence, pages 169-175, Austin, Tx., 1984.
[11] Diane Litman. Plan Recognition and Discourse Analysis: An Integrated Approach for Understanding Dialogues. PhD thesis, University of Rochester, 1985.
[12] Edwin P. D. Pednault. Preliminary Report on a Theory of Plan Synthesis. Technical Report 358, SRI International, 1985.
[13] C. Raymond Perrault and James F. Allen. A plan-based analysis of indirect speech acts. American Journal of Computational Linguistics, 6:167-182, 1980.
[14] Martha E. Pollack. Inferring Domain Plans in Question-Answering. PhD thesis, University of Pennsylvania, 1986.
[15] Earl D. Sacerdoti. A Structure for Plans and Behavior. American Elsevier, New York, 1977.
[16] Candace L. Sidner. Plan parsing for intended response recognition in discourse. Computational Intelligence, 1(1), 1985.
[17] Candace L. Sidner. What the speaker means: the recognition of speakers' plans in discourse. International Journal of Computers and Mathematics, 9:71-82, 1983.
[18] David E. Wilkins. Domain-independent planning: representation and plan generation. Artificial Intelligence, 22:269-301, 1984.
LINGUISTIC COHERENCE: A PLAN-BASED ALTERNATIVE

Diane J. Litman
AT&T Bell Laboratories
3C-408A, 600 Mountain Avenue
Murray Hill, NJ 07974

ABSTRACT

To fully understand a sequence of utterances, one must be able to infer implicit relationships between the utterances. Although the identification of sets of utterance relationships forms the basis for many theories of discourse, the formalization and recognition of such relationships has proven to be an extremely difficult computational task. This paper presents a plan-based approach to the representation and recognition of implicit relationships between utterances. Relationships are formulated as discourse plans, which allows their representation in terms of planning operators and their computation via a plan recognition process. By incorporating complex inferential processes relating utterances into a plan-based framework, a formalization and computability not available in the earlier works is provided.

INTRODUCTION

In order to interpret a sequence of utterances fully, one must know how the utterances cohere; that is, one must be able to infer implicit relationships as well as non-relationships between the utterances. Consider the following fragment, taken from a terminal transcript between a user and a computer operator (Mann [12]):

Could you mount a magtape for me? It's tape 1.

Such a fragment appears coherent because it is easy to infer how the second utterance is related to the first. Contrast this with the following fragment:

Could you mount a magtape for me? It's snowing like crazy.

This sequence appears much less coherent, since now there is no obvious connection between the two utterances. While one could postulate some connection (e.g., the speaker's magtape contains a database of places to go skiing), more likely one would say that there is no relationship between the utterances. Furthermore, because the second utterance violates an expectation of discourse coherence (Reichman [16], Hobbs [8], Grosz, Joshi, and Weinstein [6]), the utterance seems inappropriate, since there are no linguistic clues (for example, prefacing the utterance with "incidentally") marking it as a topic change.

1This work was done at the Department of Computer Science, University of Rochester, Rochester NY 14627, and supported in part by DARPA under Grant N00014-82-K-0193, NSF under Grant DCR8351665, and ONR under Grant N0014-80-C-0197.

The identification and specification of sets of linguistic relationships between utterances2 forms the basis for many computational models of discourse (Reichman [17], McKeown [14], Mann [13], Hobbs [8], Cohen [3]). By limiting the relationships allowed in a system and the ways in which relationships coherently interact, efficient mechanisms for understanding and generating well organized discourse can be developed. Furthermore, the approach provides a framework for explaining the use of surface linguistic phenomena such as clue words, words like "incidentally" that often correspond to particular relationships between utterances. Unfortunately, while these theories propose relationships that seem intuitive (e.g. "elaboration," as might be used in the first fragment above), there has been little agreement on what the set of possible relationships should be, or even if such a set can be defined. Furthermore, since the formalization of the relationships has proven to be an extremely difficult task, such theories typically have to depend on unrealistic computational processes. For example,
Cohen [3] uses an oracle to recognize her "evidence" relationships. Reichman's [17] use of a set of conversational moves depends on the future development of extremely sophisticated semantics modules. Hobbs [8] acknowledges that his theory of coherence relations "may seem to be appealing to magic," since there are several places where he appeals to as yet incomplete subtheories. Finally, Mann [13] notes that his theory of rhetorical predicates is currently descriptive rather than constructive. McKeown's [14] implemented system of rhetorical predicates is a notable exception, but since her predicates have associated semantics expressed in terms of a specific data base system, the approach is not particularly general.

2Although in some theories relationships hold between groups of utterances, in others between clauses of an utterance, these distinctions will not be crucial for the purposes of this paper.

This paper presents a new model for representing and recognizing implicit relationships between utterances. Underlying linguistic relationships are formulated as discourse plans in a plan-based theory of dialogue understanding. This allows the specification and formalization of the relationships within a computational framework, and enables a plan recognition algorithm to provide the link from the processing of actual input to the recognition of underlying discourse plans. Moreover, once a plan recognition system incorporates knowledge of linguistic relationships, it can then use the correlations between linguistic relationships and surface linguistic phenomena to guide its processing. By incorporating domain independent linguistic results into a plan recognition framework, a formalization and computability generally not available in the earlier works is provided.

The next section illustrates the discourse plan representation of domain independent knowledge about communication as knowledge about the planning process itself. A plan recognition process is then developed to recognize such plans, using linguistic clues, coherence preferences, and constraint satisfaction. Finally, a detailed example of the processing of a dialogue fragment is presented, illustrating the recognition of various types of relationships between utterances.

REPRESENTING COHERENCE USING DISCOURSE PLANS

In a plan-based approach to language understanding, an utterance is considered understood when it has been related to some underlying plan of the speaker. While previous works have explicitly represented and recognized the underlying task plans of a given domain (e.g., mount a tape) (Grosz [5], Allen and Perrault [1], Sidner and Israel [21], Carberry [2], Sidner [24]), the ways that utterances could be related to such plans were limited and not of particular concern. As a result, only dialogues exhibiting a very limited set of utterance relationships could be understood. In this work, a set of domain-independent plans about plans (i.e. meta-plans) called discourse plans are introduced to explicitly represent, reason about, and generalize such relationships. Discourse plans are recognized from every utterance and represent plan introduction, plan execution, plan specification, plan debugging, plan abandonment, and so on, independently of any domain. Although discourse plans can refer to both domain plans or other discourse plans, domain plans can only be accessed and manipulated via discourse plans. For example, in the tape excerpt above, "Could you mount a magtape for me?"
achieves a discourse plan to introduce a domain plan to mount a tape. "It's tape 1" then further specifies this domain plan.

Except for the fact that they refer to other plans (i.e. they take other plans as arguments), the representation of discourse plans is identical to the usual representation of domain plans (Fikes and Nilsson [4], Sacerdoti [18]). Every plan has a header, a parameterized action description that names the plan. Action descriptions are represented as operators on a planner's world model and defined in terms of prerequisites, decompositions, and effects. Prerequisites are conditions that need to hold (or to be made to hold) in the world model before the action operator can be applied. Effects are statements that are asserted into the world model after the action has been successfully executed. Decompositions enable hierarchical planning. Although the action description of the header may be usefully thought of at one level of abstraction as a single action achieving a goal, such an action might not be executable, i.e. it might be an abstract as opposed to primitive action. Abstract actions are in actuality composed of primitive actions and possibly other abstract action descriptions (i.e. other plans). Finally, associated with each plan is a set of applicability conditions called constraints.3 These are similar to prerequisites, except that the planner never attempts to achieve a constraint if it is false. The plan recognizer will use such general plan descriptions to recognize the particular plan instantiations underlying an utterance.

HEADER: INTRODUCE-PLAN(speaker, hearer, action, plan)
DECOMPOSITION: REQUEST(speaker, hearer, action)
EFFECTS: WANT(hearer, plan)
         NEXT(action, plan)
CONSTRAINTS: STEP(action, plan)
             AGENT(action, hearer)

Figure 1. INTRODUCE-PLAN.

Figures 1, 2, and 3 present examples of discourse plans (see Litman [10] for the complete set). The first discourse plan, INTRODUCE-PLAN, takes a plan of the speaker that involves the hearer and presents it to the hearer (who is assumed cooperative). The decomposition specifies a typical way to do this, via execution of the speech act (Searle [19]) REQUEST. The constraints use a vocabulary for referring to and describing plans and actions to specify that the only actions requested will be those that are in the plan and have the hearer as agent. Since the hearer is assumed cooperative, he or she will then adopt as a goal the joint plan containing the action (i.e. the first effect). The second effect states that the action requested will be the next action performed in the introduced plan. Note that since INTRODUCE-PLAN has no prerequisites it can occur in any discourse context, i.e. it does not need to be related to previous plans. INTRODUCE-PLAN thus allows the recognition of topic changes when a previous topic is completed, as well as recognition of interrupting topic changes (and, when not linguistically marked as such, of incoherency) at any point in the dialogue. It also captures previously implicit knowledge that at the beginning of a dialogue an underlying plan needs to be recognized.

3These constraints should not be confused with the constraints of Stefik [25], which are dynamically formulated during hierarchical plan generation and represent the interactions between subproblems.

HEADER: CONTINUE-PLAN(speaker, hearer, step, nextstep, plan)
PREREQUISITES: LAST(step, plan)
               WANT(hearer, plan)
DECOMPOSITION: REQUEST(speaker, hearer, nextstep)
EFFECT: NEXT(nextstep, plan)
CONSTRAINTS: STEP(step, plan)
             STEP(nextstep, plan)
             AFTER(step, nextstep, plan)
             AGENT(nextstep, hearer)
             CANDO(hearer, nextstep)

Figure 2. CONTINUE-PLAN.

The discourse plan in Figure 2, CONTINUE-PLAN, takes an already introduced plan as defined by the WANT prerequisite and moves execution to the next step, where the previously executed step is marked by the predicate LAST. One way of doing this is to request the hearer to perform the step that should occur after the previously executed step, assuming of course that the step is something the hearer actually can perform. This is captured by the decomposition together with the constraints. As above, the NEXT effect then updates the portion of the plan to be executed. This discourse plan captures the previously implicit relationship of coherent topic continuation in task-oriented dialogues (without interruptions), i.e. the fact that the discourse structure follows the task structure (Grosz [5]).

Figure 3 presents CORRECT-PLAN, the last discourse plan to be discussed. CORRECT-PLAN inserts a repair step into a pre-existing plan that would otherwise fail. More specifically, CORRECT-PLAN takes a pre-existing plan having subparts that do not interact as expected during execution, and debugs the plan by adding a new goal to restore the expected interactions. The pre-existing plan has subparts laststep and nextstep, where laststep was supposed to enable the performance of nextstep, but in reality did not. The plan is corrected by adding newstep, which enables the performance of nextstep and thus of the rest of the plan.

HEADER: CORRECT-PLAN(speaker, hearer, laststep, newstep, nextstep, plan)
PREREQUISITES: WANT(hearer, plan)
               LAST(laststep, plan)
DECOMPOSITION-1: REQUEST(speaker, hearer, newstep)
DECOMPOSITION-2: REQUEST(speaker, hearer, nextstep)
EFFECTS: STEP(newstep, plan)
         AFTER(laststep, newstep, plan)
         AFTER(newstep, nextstep, plan)
         NEXT(newstep, plan)
CONSTRAINTS: STEP(laststep, plan)
             STEP(nextstep, plan)
             AFTER(laststep, nextstep, plan)
             AGENT(newstep, hearer)
             ¬CANDO(speaker, nextstep)
             MODIFIES(newstep, laststep)
             ENABLES(newstep, nextstep)

Figure 3. CORRECT-PLAN.

The correction can be introduced by a REQUEST for either nextstep or newstep. When nextstep is requested, the hearer has to use the knowledge that nextstep cannot currently be performed to infer that a correction must be added to the plan. When newstep is requested, the speaker explicitly provides the correction. The effects and constraints capture the plan situation described above and should be self-explanatory, with the exception of two new terms. MODIFIES(action2, action1) means that action2 is a variant of action1, for example, the same action with different parameters or a new action achieving the still required effects. ENABLES(action1, action2) means that false prerequisites of action2 are in the effects of action1. CORRECT-PLAN is an example of a topic interruption that relates to a previous topic.
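As a bridge to the example below, here is one illustrative way to write the operators of Figures 1-3 down as data structures; the field names follow the figures, but the encoding itself is my own assumption, not Litman's implementation.

```python
from dataclasses import dataclass, field

@dataclass
class PlanSchema:
    header: str
    prerequisites: list = field(default_factory=list)
    decompositions: list = field(default_factory=list)
    effects: list = field(default_factory=list)
    constraints: list = field(default_factory=list)

INTRODUCE_PLAN = PlanSchema(
    header="INTRODUCE-PLAN(speaker, hearer, action, plan)",
    decompositions=["REQUEST(speaker, hearer, action)"],
    effects=["WANT(hearer, plan)", "NEXT(action, plan)"],
    constraints=["STEP(action, plan)", "AGENT(action, hearer)"])

CONTINUE_PLAN = PlanSchema(
    header="CONTINUE-PLAN(speaker, hearer, step, nextstep, plan)",
    prerequisites=["LAST(step, plan)", "WANT(hearer, plan)"],
    decompositions=["REQUEST(speaker, hearer, nextstep)"],
    effects=["NEXT(nextstep, plan)"],
    constraints=["STEP(step, plan)", "STEP(nextstep, plan)",
                 "AFTER(step, nextstep, plan)", "AGENT(nextstep, hearer)",
                 "CANDO(hearer, nextstep)"])

# CORRECT-PLAN (Figure 3) fits the same shape, with its two decompositions
# stored as two entries in the decompositions list.
```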
To illustrate how these discourse plans represent the relationships between utterances, consider a naturally-occurring protocol (Sidner [22]) in which a user interacts with a person simulating an editing system to manipulate network structures in a knowledge representation language:

1) User: Hi. Please show the concept Person.
2) System: Drawing...OK.
3) User: Add a role called hobby.
4) System: OK.
5) User: Make the vr be Pastime.

Assume a typical task plan in this domain is to edit a structure by accessing the structure and then performing a sequence of editing actions. The user's first request thus introduces a plan to edit the concept Person. Each successive user utterance continues through the plan by requesting the system to perform the various editing actions. More specifically, the first utterance would correspond to INTRODUCE-PLAN(User, System, show the concept Person, edit plan). Since one of the effects of INTRODUCE-PLAN is that the system adopts the plan, the system responds by executing the next action in the plan, i.e. by showing the concept Person. The user's next utterance can then be recognized as CONTINUE-PLAN(User, System, show the concept Person, add hobby role to Person, edit plan), and so on.

Now consider two variations of the above dialogue. For example, imagine replacing utterance (5) with the user's "No, leave more room please." In this case, since the system has anticipated the requirements of future editing actions incorrectly, the user must interrupt execution of the editing task to correct the system, i.e. CORRECT-PLAN(User, System, add hobby role to Person, compress the concept Person, next edit step, edit plan). Finally, imagine that utterance (5) is again replaced, this time with "Do you know if it's time for lunch yet?" Since eating lunch cannot be related to the previous editing plan topic, the system recognizes the utterance as a total change of topic, i.e. INTRODUCE-PLAN(User, System, System tell User if time for lunch, eat lunch plan).

RECOGNIZING DISCOURSE PLANS

This section presents a computational algorithm for the recognition of discourse plans. Recall that the previous lack of such an algorithm was in fact a major force behind the last section's plan-based formalization of the linguistic relationships. Previous work in the area of domain plan recognition (Allen and Perrault [1], Sidner and Israel [21], Carberry [2], Sidner [24]) provides a partial solution to the recognition problem. For example, since discourse plans are represented identically to domain plans, the same process of plan recognition can apply to both. In particular, every plan is recognized by an incremental process of heuristic search. From an input, the plan recognizer tries to find a plan for which the input is a step,4 and then tries to find more abstract plans for which the postulated plan is a step, and so on. After every step of this chaining process, a set of heuristics prune the candidate plan set based on assumptions regarding rational planning behavior. For example, as in Allen and Perrault [1], candidates whose effects are already true are eliminated, since achieving these plans would produce no change in the state of the world. As in Carberry [2] and Sidner and Israel [21], the plan recognition process is also incremental; if the heuristics cannot uniquely determine an underlying plan, chaining stops.

4Plan chaining can also be done via effects and prerequisites. To keep the example in the next section simple, plans have been expressed so that chaining via decompositions is sufficient.

As mentioned above, however, this is not a full solution. Since the plan recognizer is now recognizing discourse as well as domain plans from a single utterance, the set of recognition processes must be coordinated.5 An algorithm for coordinating the recognition of domain and discourse plans from a single utterance has been presented in Litman and Allen [9,11]. In brief, the plan recognizer recognizes a discourse plan from every utterance, then uses a process of constraint satisfaction to initiate recognition of the domain and any other discourse plans related to the utterance. Furthermore, to record and monitor execution of the discourse and domain plans active at any point in a dialogue, a dialogue context in the form of a plan stack is built and maintained by the plan recognizer. Various models of discourse have argued that an ideal interrupting topic structure follows a stack-like discipline (Reichman [17], Polanyi and Scha [15], Grosz and Sidner [7]). The plan recognition algorithm will be reviewed when tracing through the example of the next section.

5Although Wilensky [26] introduced meta-plans into a natural language system to handle a totally different issue, that of concurrent goal interaction, he does not address details of coordination.

Since discourse plans reflect linguistic relationships between utterances, the earlier work on domain plan recognition can also be augmented in several other ways. For example, the search process can be constrained by adding heuristics that prefer discourse plans corresponding to the most linguistically coherent continuations of the dialogue. More specifically, in the absence of any linguistic clues (as will be described below), the plan recognizer will prefer relationships that, in the following order:

(1) continue a previous topic (e.g. CONTINUE-PLAN)
(2) interrupt a topic for a semantically related topic (e.g. CORRECT-PLAN, other corrections and clarifications as in Litman [10])
(3) interrupt a topic for a totally unrelated topic (e.g. INTRODUCE-PLAN).

Thus, while interruptions are not generally predicted, they can be handled when they do occur. The heuristics also follow the principle of Occam's razor, since they are ordered to introduce as few new plans as possible. If within one of these preferences there are still competing interpretations, the interpretation that most corresponds to a stack discipline is preferred. For example, a continuation resuming a recently interrupted topic is preferred to continuation of a topic interrupted earlier in the conversation. Finally, since the plan recognizer now recognizes implicit relationships between utterances, linguistic clues signaling such relationships (Grosz [5], Reichman [17], Polanyi and Scha [15], Sidner [24], Cohen [3], Grosz and Sidner [7]) should be exploitable by the plan recognition algorithm. In other words, the plan recognizer should be aware of correlations between specific words and the discourse plans they typically signal. Clues can then be used both to reinforce as well as to overrule the preference ordering given above. In fact, in the latter case clues ease the recognition of topic relationships that would otherwise be difficult (if not impossible (Cohen [3], Grosz and Sidner [7], Sidner [24])) to understand.
For example, consider recognizing the topic change in the tape variation earlier, repeated below for convenience:

Could you mount a magtape for me? It's snowing like crazy.

Using the coherence preferences, the plan recognizer first tries to interpret the second utterance as a continuation of the plan to mount a tape, then as a related interruption of this plan, and only when these efforts fail as an unrelated change of topic. This is because a topic change is least expected in the unmarked case. Now, imagine the speaker prefacing the second utterance with a clue such as "incidentally," a word typically used to signal topic interruption. Since the plan recognizer knows that "incidentally" is a signal for an interruption, the search will not even attempt to satisfy the first preference heuristic, since a signal for the second or third is explicitly present.

EXAMPLE

This section uses the discourse plan representations and plan recognition algorithm of the previous sections to illustrate the processing of the following dialogue, a slightly modified portion of a scenario (Sidner and Bates [23]) developed from the set of protocols described above:

User: Show me the generic concept called "employee."
System: OK. <system displays network>
User: No, move the concept up.
System: OK. <system redisplays network>
User: Now, make an individual employee concept whose first name is "Sam" and whose last name is "Jones."

Although the behavior to be described is fully specified by the theory, the implementation corresponds only to the new model of plan recognition. All simulated computational processes have been implemented elsewhere, however. Litman [10] contains a full discussion of the implementation.

Figure 4 presents the relevant domain plans for this domain, taken from Sidner and Israel [21] with minor modifications. ADD-DATA is a plan to add new data into a network, while EXAMINE is a plan to examine parts of a network. Both plans involve the subplan CONSIDER-ASPECT, in which the user considers some aspect of a network, for example by looking at it (the decomposition shown), listening to a description, or thinking about it.

HEADER: ADD-DATA(user, netpiece, data, screenLocation)
DECOMPOSITION: CONSIDER-ASPECT(user, netpiece)
               PUT(system, data, screenLocation)

HEADER: EXAMINE(user, netpiece)
DECOMPOSITION: CONSIDER-ASPECT(user, netpiece)

HEADER: CONSIDER-ASPECT(user, netpiece)
DECOMPOSITION: DISPLAY(system, user, netpiece)

Figure 4. Graphic Editor Domain Plans.

The processing begins with a speech act analysis of "Show me the generic concept called 'employee'":

REQUEST(user, system, D1:DISPLAY(system, user, E1))

where E1 stands for "the generic concept called 'employee.'" As in Allen and Perrault [1], determination of such a literal6 speech act is fairly straightforward. Imperatives indicate REQUESTS, and the propositional content (e.g. DISPLAY) is determined via the standard syntactic and semantic analysis of most parsers.

6See Litman [10] for a discussion of the treatment of indirect speech acts (Searle [20]).

Since at the beginning of a dialogue there is no discourse context, the plan recognizer tries to introduce a plan (or plans) according to coherence preference (3). Using the plan schemas of the second section, the REQUEST above, and the process of forward chaining via plan decomposition, the system postulates that the utterance is the decomposition of INTRODUCE-PLAN(user, system, D1, ?plan), where STEP(D1, ?plan) and AGENT(D1, system). The hypothesis is then evaluated using the set of plan heuristics, e.g. the effects of the plan must not already be true and the constraints of every recognized plan must be satisfiable. To satisfy the STEP constraint, a plan containing D1 will be created. Nothing more needs to be done with respect to the second constraint, since it is already satisfied. Finally, since INTRODUCE-PLAN is not a step in any other plan, further chaining stops.

The system then expands the introduced plan containing D1, using an analogous plan recognition process. Since the display action could be a step of the CONSIDER-ASPECT plan, which itself could be a step of either the ADD-DATA or EXAMINE plans, the domain plan is ambiguous.
Note that heuristics cannot eliminate either possibility, since at the beginning of the dialogue any domain plan is a reasonable expectation. Chaining halts at this branch point, and since no more plans are introduced, the process of plan recognition also ends. The final hypothesis is that the user executed a discourse plan to introduce either the domain plan ADD-DATA or EXAMINE.

Once the plan structures are recognized, their effects are asserted and the postulated plans are expanded top down to include any other steps (using the information in the plan descriptions). The plan recognizer then constructs a stack representing each hypothesis, as shown in Figure 5. The first stack has PLAN1 at the top, PLAN2 at the bottom, and encodes the information that PLAN1 was executed while PLAN2 will be executed upon completion of PLAN1. The second stack is analogous. Solid lines represent plan recognition inferences due to forward chaining, while dotted lines represent inferences due to later plan expansion.

PLAN1 [completed]
  INTRODUCE-PLAN(user, system, D1, PLAN2)
    REQUEST(user, system, D1) [LAST]

PLAN2
  ADD-DATA(user, E1, ?data, ?loc)
    CONSIDER-ASPECT(user, E1)          PUT(system, ?data, ?loc)
      D1:DISPLAY(system, user, E1) [NEXT]

PLAN1a [completed]
  INTRODUCE-PLAN(user, system, D1, PLAN3)
    REQUEST(user, system, D1) [LAST]

PLAN3
  EXAMINE(user, E1)
    CONSIDER-ASPECT(user, E1)
      D1:DISPLAY(system, user, E1) [NEXT]

Figure 5. The Two Plan Stacks after the First Utterance.

As desired, the plan recognizer has constructed a plan-based interpretation of the utterance in terms of expected discourse and domain plans, an interpretation which can then be used to construct and generate a response. For example, in either hypothesis the system can pop the completed plan introduction and execute D1, the next action in both domain plans. Since the higher level plan containing D1 is still ambiguous, deciding exactly what to do is an interesting plan generation issue.

Unfortunately, the system chooses a display that does not allow room for the insertion of a new concept, leading to the user's response "No, move the concept up." The utterance is parsed and input to the plan recognizer as the clue word "no" (using the plan recognizer's list of standard linguistic clues) followed by the REQUEST(user, system, M1:MOVE(system, E1, up)) (assuming the resolution of "the concept" to E1). The plan recognition algorithm then proceeds in both contexts postulated above. Using the knowledge that "no" typically does not signal a topic continuation, the plan recognizer first modifies its default mode of processing, i.e. the assumption that the REQUEST is a CONTINUE-PLAN (preference 1) is overruled. Note, however, that even without such a linguistic clue, recognition of a plan continuation would have ultimately failed, since in both stacks CONTINUE-PLAN's constraint STEP(M1, PLAN2/PLAN3) would have failed. The clue thus allows the system to reach reasonable hypotheses more efficiently, since unlikely inferences are avoided.

Proceeding with preference (2), the system postulates that either PLAN2 or PLAN3 is being corrected, i.e., a discourse plan correcting one of the stacked plans is hypothesized. Since the REQUEST matches both decompositions of CORRECT-PLAN, there are two possibilities: CORRECT-PLAN(user, system, ?laststep, M1, ?nextstep, ?plan) and CORRECT-PLAN(user, system, ?laststep, ?newstep, M1, ?plan), where the variables in each will be bound as a result of constraint and prerequisite satisfaction from application of the heuristics. For example, candidate plans are only reasonable if their prerequisites were true, i.e. (in both stacks and corrections) WANT(system, ?plan) and LAST(?laststep, ?plan). Assuming the plan was executed in the context of PLAN2 or PLAN3 (after PLAN1 or PLAN1a was popped and the DISPLAY performed), ?plan could only have been bound to PLAN2 or PLAN3,
and ?laststep bound to D1. Satisfaction of the constraints eliminates the PLAN3 binding, since the constraints indicate at least two steps in the plan, while PLAN3 contains a single step described at different levels of abstraction. Satisfaction of the constraints also eliminates the second CORRECT-PLAN interpretation, since STEP(M1, PLAN2) is not true. Thus only the first correction on the first stack remains plausible, and in fact, using PLAN2 and the first correction, the rest of the constraints can be satisfied. In particular, the bindings yield:

(1) STEP(D1, PLAN2)
(2) STEP(P1, PLAN2)
(3) AFTER(D1, P1, PLAN2)
(4) AGENT(M1, system)
(5) ¬CANDO(user, P1)
(6) MODIFIES(M1, D1)
(7) ENABLES(M1, P1)

where P1 stands for PUT(system, ?data, ?loc), resulting in the hypothesis CORRECT-PLAN(user, system, D1, M1, P1, PLAN2). Note that a final possible hypothesis for the REQUEST, e.g. introduction of a new plan, is discarded since it does not tie in with any of the expectations (i.e. a preference (2) choice is preferred over a preference (3) choice). The effects of CORRECT-PLAN are asserted (M1 is inserted into PLAN2 and marked as NEXT) and CORRECT-PLAN is pushed on to the stack, suspending the plan corrected, as shown in Figure 6.

PLAN4 [completed]
  C1:CORRECT-PLAN(user, system, D1, M1, P1, PLAN2)
    REQUEST(user, system, M1) [LAST]

PLAN2
  ADD-DATA(user, E1, ?data, ?loc)
    CONSIDER-ASPECT(user, E1)
      D1:DISPLAY(system, user, E1) [LAST]
    M1:MOVE(system, E1, up) [NEXT]
    P1:PUT(system, ?data, ?loc)

Figure 6. The Plan Stack after the User's Second Utterance.

The system has thus recognized not only that an interruption of ADD-DATA has occurred, but also that the relationship of interruption is one of plan correction. Note that, unlike the first utterance, the plan referred to by the second utterance is found in the stack rather than constructed. Using the updated stack, the system can then pop the completed correction and resume PLAN2 with the new (next) step M1.

The system parses the user's next utterance ("Now, make an individual employee concept whose first name is 'Sam' and whose last name is 'Jones'") and again picks up an initial clue word, this time one that explicitly marks the utterance as a continuation and thus reinforces coherence preference (1). The utterance can indeed be recognized as a continuation of PLAN2, e.g. CONTINUE-PLAN(user, system, M1, MAKE1, PLAN2), analogously to the above detailed explanations. M1 and PLAN2 are bound due to prerequisite satisfaction, and MAKE1 chained through P1 due to constraint satisfaction. The updated stack is shown in Figure 7. At this stage, it would then be appropriate for the system to pop the completed CONTINUE plan and resume execution of PLAN2 by performing MAKE1.

[completed]
  CONTINUE-PLAN(user, system, M1, MAKE1, PLAN2)
    REQUEST(user, system, MAKE1) [LAST]

PLAN2
  ADD-DATA(user, E1, SamJones, ?loc)
    CONSIDER-ASPECT(user, E1)
      D1:DISPLAY(system, user, E1)
    M1:MOVE(system, E1, up) [LAST]
    P1:PUT(system, SamJones, ?loc)
      MAKE1:MAKE(system, user, SamJones) [NEXT]

Figure 7. Continuation of the Domain Plan.
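The stack discipline traced through Figures 5-7 reduces to two operations, sketched below under the simplifying assumption that plans are just labels; the helper names are mine, not Litman's.

```python
def pop_completed(stack, plan):
    """Remove a completed plan; only the top of the stack may be popped."""
    assert stack[0] == plan
    return stack[1:]

def push_suspending(stack, plan):
    """Push a plan (e.g. a correction) on top, suspending the plan below it."""
    return [plan] + stack

stack = ["PLAN1", "PLAN2"]               # after the first utterance (Figure 5)
stack = pop_completed(stack, "PLAN1")    # system performs D1, PLAN2 resumes
stack = push_suspending(stack, "PLAN4")  # the correction suspends PLAN2 (Figure 6)
stack = pop_completed(stack, "PLAN4")    # resume PLAN2 with the new step M1
print(stack)                             # -> ['PLAN2']
```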
CONCLUSIONS

This paper has presented a framework for both representing as well as recognizing relationships between utterances. The framework, based on the assumption that people's utterances reflect underlying plans, reformulates the complex inferential processes relating utterances within a plan-based theory of dialogue understanding. A set of meta-plans called discourse plans were introduced to explicitly formalize utterance relationships in terms of a small set of underlying plan manipulations. Unlike previous models of coherence, the representation was accompanied by a fully specified model of computation based on a process of plan recognition. Constraint satisfaction is used to coordinate the recognition of discourse plans, domain plans, and their relationships. Linguistic phenomena associated with coherence relationships are used to guide the discourse plan recognition process.

Although not the focus of this paper, the incorporation of topic relationships into a plan-based framework can also be seen as an extension of work in plan recognition. For example, Sidner [21,24] analyzed debuggings (as in the dialogue above) in terms of multiple plans underlying a single utterance. As discussed fully in Litman and Allen [11], the representation and recognition of discourse plans is a systemization and generalization of this approach. Use of even a small set of discourse plans enables the principled understanding of previously problematic classes of dialogues in several task-oriented domains. Ultimately the generality of any plan-based approach depends on the ability to represent any domain of discourse in terms of a set of underlying plans. Recent work by Grosz and Sidner [7] argues for the validity of this assumption.

ACKNOWLEDGEMENTS

I would like to thank Julia Hirschberg, Marcia Derr, Mark Jones, Mark Kahrs, and Henry Kautz for their helpful comments on drafts of this paper.

REFERENCES

1. J. F. Allen and C. R. Perrault, Analyzing Intention in Utterances, Artificial Intelligence 15, 3 (1980), 143-178.
2. S. Carberry, Tracking User Goals in an Information-Seeking Environment, AAAI, Washington, D.C., August 1983, 59-63.
3. R. Cohen, A Computational Model for the Analysis of Arguments, Ph.D. Thesis and Tech. Rep. 151, University of Toronto, October 1983.
4. R. E. Fikes and N. J. Nilsson, STRIPS: A New Approach to the Application of Theorem Proving to Problem Solving, Artificial Intelligence 2, 3/4 (1971), 189-208.
5. B. J. Grosz, The Representation and Use of Focus in Dialogue Understanding, Technical Note 151, SRI, July 1977.
6. B. J. Grosz, A. K. Joshi and S. Weinstein, Providing a Unified Account of Definite Noun Phrases in Discourse, ACL, MIT, June 1983, 44-50.
7. B. J. Grosz and C. L. Sidner, Discourse Structure and the Proper Treatment of Interruptions, IJCAI, Los Angeles, August 1985, 832-839.
8. J. R. Hobbs, On the Coherence and Structure of Discourse, in The Structure of Discourse, L. Polanyi (ed.), Ablex Publishing Corporation, forthcoming. Also CSLI (Stanford) Report No. CSLI-85-37, October 1985.
9. D. J. Litman and J. F. Allen, A Plan Recognition Model for Clarification Subdialogues, Coling84, Stanford, July 1984, 302-311.
10. D. J. Litman, Plan Recognition and Discourse Analysis: An Integrated Approach for Understanding Dialogues, PhD Thesis and Technical Report 170, University of Rochester, 1985.
11. D. J. Litman and J. F. Allen, A Plan Recognition Model for Subdialogues in Conversation, Cognitive Science, to appear. Also University of Rochester Tech. Rep.
141, November 1984.
12. W. Mann, Corpus of Computer Operator Transcripts, Unpublished Manuscript, ISI, 1970's.
13. W. C. Mann, Discourse Structures for Text Generation, Coling84, Stanford, July 1984, 367-375.
14. K. R. McKeown, Generating Natural Language Text in Response to Questions about Database Structure, PhD Thesis, University of Pennsylvania, Philadelphia, 1982.
15. L. Polanyi and R. J. H. Scha, The Syntax of Discourse, Text (Special Issue: Formal Methods of Discourse Analysis) 3, 3 (1983), 261-270.
16. R. Reichman, Conversational Coherency, Cognitive Science 2, 4 (1978), 283-328.
17. R. Reichman-Adar, Extended Person-Machine Interfaces, Artificial Intelligence 22, 2 (1984), 157-218.
18. E. D. Sacerdoti, A Structure for Plans and Behavior, Elsevier, New York, 1977.
19. J. R. Searle, Speech Acts, an Essay in the Philosophy of Language, Cambridge University Press, New York, 1969.
20. J. R. Searle, Indirect Speech Acts, in Speech Acts, vol. 3, P. Cole and J. Morgan (eds.), Academic Press, New York, NY, 1975.
21. C. L. Sidner and D. J. Israel, Recognizing Intended Meaning and Speakers' Plans, IJCAI, Vancouver, 1981, 203-208.
22. C. L. Sidner, Protocols of Users Manipulating Visually Presented Information with Natural Language, Report 5128, Bolt Beranek and Newman, September 1982.
23. C. L. Sidner and M. Bates, Requirements of Natural Language Understanding in a System with Graphic Displays, Report Number 5242, Bolt Beranek and Newman Inc., March 1983.
24. C. L. Sidner, Plan Parsing for Intended Response Recognition in Discourse, Computational Intelligence 1, 1 (February 1985), 1-10.
25. M. Stefik, Planning with Constraints (MOLGEN: Part 1), Artificial Intelligence 16, (1981), 111-140.
26. R. Wilensky, Planning and Understanding, Addison-Wesley Publishing Company, Reading, Massachusetts, 1983.
The Structure of User-Adviser Dialogues: Is there Method in their Madness?

Raymonde Guindon
Microelectronics and Computer Technology Corporation - MCC

Paul Sladky
University of Texas, Austin & MCC

Hans Brunner
Honeywell - Computer Sciences Center

Joyce Conner
MCC

ABSTRACT

Novice users engaged in task-oriented dialogues with an adviser to learn how to use an unfamiliar statistical package. The users' task was analyzed and a task structure was derived. The task structure was used to segment the dialogue into subdialogues associated with the subtasks of the overall task. The representation of the dialogue structure as a hierarchy of subdialogues, partly corresponding to the task structure, was validated by three converging analyses. First, the distribution of non-pronominal noun phrases and the distribution of pronominal noun phrases exhibited a pattern consistent with the derived dialogue structure. Non-pronominal noun phrases occurred more frequently at the beginning of subdialogues than later, as can be expected since one of their functions is to indicate topic shifts. On the other hand, pronominal noun phrases occurred less frequently in the first sentence of the subdialogues than in the following sentences of the subdialogues, as can be expected since they are used to indicate topic continuity. Second, the distributions of the antecedents of pronominal noun phrases and of non-pronominal noun phrases showed a pattern consistent with the derived dialogue structure. Finally, distinctive clue words and phrases were found reliably at the boundaries of subdialogues with different functions.

INTRODUCTION

The goal of this paper is to find evidence for the notion of dialogue structure as it has been developed in computational linguistics (Grosz, 1977; Sidner and Grosz, 1985). The role of two hypothesized determinants of discourse structure will be examined: 1) the structure of the task that the user is trying to accomplish and the user's goals and plans arising from the task; 2) the strategies available to the user when the user is unable to achieve the task or parts of the task (i.e., meta-plans). The study of dialogue structures is important because computationally complex phenomena such as anaphora resolution have been theoretically linked to the task and dialogue structures.

FOCUSING AND ANAPHORA RESOLUTION

Dialogue Structure: A Key to Computing Focus

Given the computational expense of searching, of inferential processing, and of semantic consistency checking required to resolve anaphors, restricting the search a priori to a likely set of antecedents seems advantageous. The a priori restriction on the set of potential antecedents for anaphora resolution has been called focusing (Grosz, 1977; Guindon, 1985; Reichman, 1981; Sidner, 1983). Grosz defines a focus space as that subset of the participant's total knowledge that is in the focus of attention and that is relevant to process a discourse segment.

Task-oriented dialogues are dialogues between conversants whose goals are to accomplish some specific tasks by exchanging information through the dialogues. Task-oriented dialogues are believed to exhibit a structure corresponding to the structure of the task being performed. The entire dialogue is segmented into subordinated subdialogues in a manner parallel to the segmentation of the whole task into subordinated subtasks. Grosz (1977) assumes that the task hierarchy imposes a hierarchy on the subdialogue segments.
As a subtask of the task is performed (and its corresponding subdialogue is expressed), the different objects and actions associated with this subtask come into focus. As this subtask (and its corresponding subdialogue) is completed, its associated objects and actions leave focus. The task of which the completed subtask is a part then returns into focus. The segmentation of a dialogue into interrelated subdialogues is associated with shifts in focus occurring during the dialogue. Detailed task structures for each problem given in this study can be found in Guindon, Sladky, Brunner, and Conner (1986).

A cognitive model of anaphora resolution and focusing is provided in Guindon (1985) and Kintsch and van Dijk (1978). Human memory is divided into a short-term memory and a long-term memory. Short-term memory is divided into a cache and a buffer. The cache contains items from previous sentences and the buffer holds the incoming sentence. Short-term memory can only contain a small number of text items and its retrieval time is fast. Long-term memory can contain a very large number of text items but its retrieval time is slow. During the integration of a new sentence, the T most important and R most recent items in short-term memory are held over in the cache. Items in focus are the items in the cache and are more rapidly retrieved. Items not in focus are items in long-term memory and are more slowly retrieved. Because the cache contains important items that are not necessarily recent, pronouns can be used to refer to items that have been mentioned many sentences back. An empirical study demonstrates the cognitive basis for focusing, topic shifts, the use of pronominal noun phrases to refer to antecedents in focus, and the use of non-pronominal noun phrases to refer to antecedents not in focus.

Grosz and Sidner (1985) distinguish three structures within discourse structure: 1) the structure of the sequence of utterances, 2) the structure of the intentions conveyed, and 3) the attentional state. Distinguishing these three structures gives a better account of discourse phenomena such as boundary markers, anaphors, and interruptions. This paper will cover mainly the second structure and will attempt to find evidence linking the dialogue structure to the task structure. The main point is that the structure of the intentions conveyed in the discourse should mirror to some extent the task structure (but see the next section). The first structure of the dialogue, the structure of the sequence of utterances, will actually be examined with the pronominal and non-pronominal noun phrase distributions, the antecedent distribution, and the boundary marker analyses. We expect that these three analyses will support the derived dialogue structure, the intentional structure. The last structure, the attentional structure, is not discussed here but has been discussed in Guindon (1985).

The main point of "focusing" theories of anaphora resolution is that the discourse structure, based on the task structure, is a crucial determinant of which discourse entities are held in focus and are readily accessible for anaphora resolution. Subdialogues that are in focus are contexts that are used to restrict the search for antecedents of anaphors.

Task Structure Can Only Partially Determine Dialogue Structure

In any case, the task structure can only partially determine the goals and plans of the novice user and, indirectly, the dialogue structure.
This is because the novice user does not have a good model of the task and is in the process of building one, and because the adviser has only a partially correct model of what the novice user knows about the task. The verbal interaction between the user and the adviser is not just one of execution of plans and recognition of plans, but rather one of situated actions and of detection and repair of imperfect understanding (Suchman, 1985). As a consequence, the dialogue structures from our data contained subdialogues that functioned as clarification (i.e., requests for information) to correct imperfect understanding or as acknowledgement to verify understanding between the participants. The notion of meta-plans allows us to account for the presence of clarification and acknowledgement subdialogues (see Litman and Allen, 1984).

RESEARCH GOALS

There are many unanswered questions about the nature of dialogue structures, about the validity and usefulness of the concept of a dialogue structure, about the role of the task structure in determining dialogue structure, and about the contribution of the task structure to focusing and anaphora resolution. For example, the precise mechanisms to determine the initial focus and to update it on the basis of the dialogue structure are still unknown (Sidner, 1983). The goal of this paper is to find evidence for the validity of the notion of discourse structure derived from the task structure by: 1) describing a technique to derive the structure of dialogues, and 2) validating the derived dialogue structure by three independent converging analyses: a) the distribution of non-pronominal and pronominal noun phrases, b) the distribution of antecedents of pronominal and non-pronominal anaphors, and c) the presence of subdialogue boundary markers.

If complete subdialogues get into and out of focus, and if subdialogues are conceived as contexts restricting the set of antecedents to be searched and tested during anaphora resolution, identifying the appropriate unit of discourse corresponding to these subdialogues is crucial.

One phenomenon that should have correspondence to the dialogue structure is the distribution of non-pronominal and pronominal noun phrases. Non-pronominal noun phrases can be used to introduce new entities in the dialogue or to reinstate into focus a previous dialogue entity out of focus. In other words, non-pronominal noun phrases are used to indicate topic shifts. As a consequence, they should tend to occur more frequently at the beginning of the subdialogues than later in the subdialogues. On the other hand, pronominal noun phrases are used to refer to entities currently in focus. In other words, pronominal noun phrases are used to indicate topic continuity. As a consequence, they should tend to occur less frequently in the first sentence of a subdialogue but more frequently in subsequent sentences. Empirical evidence for these claims is presented in Guindon (1985). She found that anaphora resolution time is faster for pronominal noun phrases whose antecedents are in focus than for those whose antecedents are not in focus. On the other hand, she found faster anaphora resolution time for non-pronominal noun phrases whose antecedents were not in focus than for those whose antecedents were in focus. In other words, the form of the anaphor signals whether the antecedent is in focus (as when the anaphor is pronominal) or not in focus (as when the anaphor is non-pronominal).
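These findings suggest a simple operational picture of the cache model described earlier. The following sketch (Python; the data structures, parameter values, and matching test are invented for illustration and are not Guindon's implementation) shows how a T-most-important/R-most-recent cache could mediate anaphora resolution, with pronouns searched only against the fast cache and full noun phrases also matched against long-term memory:

    # Toy version of the short-term-memory cache model of focusing
    # (after Kintsch and van Dijk, 1978; Guindon, 1985). All names and
    # parameter values are illustrative assumptions.

    T = 3  # how many most-important items survive integration
    R = 2  # how many most-recent items survive integration

    def update_cache(cache, incoming):
        """Integrate the buffer (incoming sentence) with the cache, keeping
        the T most important and R most recent items; the rest is demoted
        to long-term memory. Items are dicts with 'referent', 'importance',
        and 'recency' fields (an assumed encoding)."""
        pool = cache + incoming
        keep_imp = sorted(pool, key=lambda i: -i["importance"])[:T]
        keep_rec = sorted(pool, key=lambda i: -i["recency"])[:R]
        new_cache = [i for i in pool if i in keep_imp or i in keep_rec]
        demoted = [i for i in pool if i not in new_cache]
        return new_cache, demoted

    def resolve(pronominal, matches, cache, long_term):
        """Pronouns are resolved against the cache (items in focus);
        non-pronominal noun phrases may also reach long-term memory."""
        for item in cache:            # fast retrieval
            if matches(item):
                return item
        if not pronominal:
            for item in long_term:    # slow retrieval
                if matches(item):
                    return item
        return None                   # resolution failure

On this policy, an item of high importance can stay in the cache across many sentences without recent mention, which is why a pronoun can still felicitously refer to it.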
Grosz, Joshi, and Weinstein (1983) have made similar claims about the role of non-pronominal definite noun phrases and pronominal definite noun phrases. In linguistics, Clancey (cited in Fox, 1985) found that the use of definite non-pronominal noun phrases was associated with episode boundaries. Psychological evidence has shown the special status in memory of certain sentences in discourse found at the beginning of paragraphs. Sentences which belong to the macrostructure (i.e., gist) of the discourse have been shown to be recognized with more accuracy and faster than sentences belonging to the microstructure (Guindon and Kintsch, 1984). Macrostructure sentences are by definition more abstract and important than microstructure sentences. They express a summary of all or part of the discourse. The macrostructure sentences tend to be the first sentences in paragraphs and to be composed of non-pronominal definite noun phrases (van Dijk and Kintsch, 1983).

Linde (1979) observed the distribution of it and that in descriptions of houses or apartments. She found that shifts in focus were associated with a change in the room described. The pronoun it was used to describe objects in focus either associated with the room then described or with the entire apartment, even when the apartment itself had not been mentioned for many sentences. The pronoun that was used either to refer to an object outside the focus or to an object in focus when the description of the object was in contrast with another description. Grosz (1977) observed a use of the pronoun it in her dialogues similar to the use Linde observed.

In summary, the most important sentences, often at the beginning of new paragraphs, tend to be composed of full definite noun phrases. These sentences often introduce a new discourse entity or reinstate a former one which was out of focus, creating a topic shift. Sentences which are "subordinated" to the most important sentence in the paragraph tend to be composed of pronouns and signal topic continuity.

Another clue to dialogue structures is the distribution of antecedents of anaphors. Given that pronominals are used to refer to important or recent concepts (Guindon, 1985), the distribution of antecedents of pronominal anaphors should cluster in the current subdialogue (i.e., recency or importance), its parent (i.e., importance and recency), and the root subdialogue (i.e., importance). On the other hand, because non-pronominal anaphors are more informative than pronominal anaphors, they may refer to antecedents that are more widespread in the dialogue, that is, antecedents that are not as recent or as important.

Another obvious clue is the presence of reliable boundary markers for different subdialogue types. Some of these markers have been reported by Grosz (1977), Reichman (1981), and Polanyi and Scha (1983). The boundary markers found in our subdialogues should agree with those found in these previous analyses and extend them.

Derivation of a dialogue structure on the basis of the task structure

An important prerequisite in the interpretation of user-adviser dialogues is to analyze the task the users are trying to perform. A task analysis is a detailed description of the determinants of the user's behaviors arising from the task context. The first step in performing task analysis is to identify the objects involved in the task. In our case, these objects are vectors, matrices, rows, columns, variables, variable labels, etc.
The second step is to identify all the operators in the task which, when applied to one or more objects, change the state of completion of the task. In our case, these operators are function calls (e.g., mean, variance, sort), subsetting values from vectors, listing of values, etc. Of course, not every operator applies to every object. A third step is to identify the sequence of operators which would produce a desired state (the goal - e.g., the problem solved) from an initial state.

Such a task analysis can be performed at many levels of abstraction, from high-level conceptual operators to low-level physical operators. The desired level of abstraction depends upon the level of abstraction of the behaviors that one wants to account for. Usually, the more complex or cognitive the task modelled, the more abstract or coarse the operators selected. In such a case, the operators will reflect the specifics of the task environment, such as vectors, matrices, screen, and keyboard. The finer the grain of analysis, the more the operators are associated with basic motor, perceptual, or cognitive mechanisms. Since the task we are trying to model is quite cognitive in nature - solving statistical problems with an unfamiliar statistical package - an appropriate level of analysis seems to be at the level of the so-called GOMS model (Card, Moran, and Newell, 1983). GOMS stands for: (1) a set of Goals; (2) a set of Operators; (3) a set of Methods for achieving the goals; (4) a set of Selection rules for choosing among competing methods for goals. In the notation used in our examples, we have used a slightly different terminology: we use the term action instead of operator and the term plan instead of method. We have also used the terms prerequisites, constraints, and meta-plans from artificial intelligence. The notion of meta-plans allowed us to account for the presence of clarification and acknowledgement subdialogues (see Litman and Allen, 1984) that could not be accounted for directly by the task structure.

We will now describe how the task structure was used in deriving the dialogue structure. Goal or plan subordination arises from the plan decomposition into subplans or from unsatisfied prerequisites. In a task structure, plans are composed of other plans themselves, leading to a hierarchical structure. In other words, a subgoal to a goal can arise from a plan decomposition into subplans or from the prerequisite conditions which must hold true before applying the plan. Here are the coding decisions used in deriving the dialogue structure:

• If the user initiated a subdialogue consisting of the statement of a plan or of a goal, the subdialogue would be "inserted" in the task structure at the location of the plan described.

• If the user initiated a subdialogue consisting of the statement of a subplan within the decomposition of its parent plan, the subdialogue would be "inserted" in the appropriate daughter subplan of the parent plan in the task structure.

• If the user initiated a subdialogue consisting of a subplan arising from an unsatisfied prerequisite of a plan, then the subdialogue would be "inserted" as a daughter of the subdialogue associated with the plan.

Clarification subdialogues arise from the restrictions on the meta-plans that the participants can use when they cannot achieve one of their plans: in our study, they must ask the adviser for help aloud. The meta-plan, ASK-ADVISER-HELP, itself has prerequisites, one of them being that the linguistic communication be successful.
This leads to the linguistic clarification subdialogues that occur when there are ambiguities in the message that need to be resolved by requesting disambiguating information from the adviser. Another consequence of the meta-plan ASK-ADVISER-HELP is the presence of acknowledgement subdialogues, whereby participants ensure that the communication is successful by acknowledging that they have understood the message. Let's continue describing the coding scheme:

• The clarification subdialogues are subordinated to the subdialogue mentioning the concept for which clarification is requested (e.g., goal, plan, term).

• The acknowledgement subdialogues are subordinated to the subdialogue mentioning the acknowledged concept.

• The linguistic clarification subdialogues are also subordinated to the subdialogue containing the utterance for which clarification is requested.

• Since we are not fully modeling the user's task, subdialogues regarding the participants' behaviors as subjects in a study were ignored.

• Since knowing the required statistical formula and knowing how to use the console were required to solve all the problems, these prerequisites were not always encoded explicitly in the task structure. Nevertheless, the clarification and acknowledgement subdialogues regarding statistics and the use of the console were subordinated to the subdialogue associated with the plan for which these clarifications needed to be obtained.

DATA COLLECTION

Overview of Data Collection Method

Three novice users had basic knowledge of statistics. They had to use an unfamiliar statistical package to solve five simple descriptive statistics problems. There were two main restrictions imposed on the strategies employed to solve the problems: 1) the only source of information was the adviser; 2) all requests for information had to be said aloud. These restrictions were considered as restrictions on the meta-plans available to the participants when unable to solve the problems. The participant, the adviser sitting to his/her right, and the console were videotaped.

Coding of the Dialogues

Each dialogue was segmented into subdialogues which appeared to be the execution of a plan to satisfy a goal of the user or the adviser, on the basis of the task structure. In addition to segmenting the dialogue into subdialogues, the relations between subdialogues were determined. One source of such relations is the decomposition of a total task into subtasks to be performed in some order. This decomposition is called the task structure (see Grosz, 1977), as described previously. Two important relations are subordination and enablement. Consider a dialogue occurring while performing a task, such as baking a cake, composed of three subtasks: (1) measure ingredients, (2) mix ingredients, (3) put the mixed ingredients in the oven. Subtasks 1, 2, and 3 are said to be subordinated to the task of baking a cake. Moreover, subtask 2 must precede subtask 3. Subtask 2 is said to enable subtask 3. The subdialogues that would be instrumental to the execution of these subtasks would stand in the same relations. However, the decomposition of the task structure was not the only source of subordination and enablement relations between subdialogues. Clarification and acknowledgement subdialogues, even though they did not correspond to a subtask in the task structure, were subordinated to the subdialogues introducing the clarified and acknowledged concepts, respectively.
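The coding decisions above amount to a tree-building procedure. A minimal sketch of such a procedure is given below (Python; the node kinds and example labels are invented for illustration, and this paraphrases the coding scheme rather than reproducing the procedure actually used by the coder):

    # Toy attachment procedure for deriving a dialogue structure.
    # Plan/goal statements attach at the corresponding plan in the task
    # structure; clarification and acknowledgement subdialogues are
    # subordinated to the subdialogue mentioning the clarified or
    # acknowledged concept.

    class Node:
        def __init__(self, kind, label):
            self.kind = kind        # "plan", "clarification", "acknowledgement"
            self.label = label
            self.children = []

        def attach(self, kind, label):
            child = Node(kind, label)
            self.children.append(child)
            return child

    # Example: the user states the plan ENTER-DATA, asks a clarification
    # about the keyboard while executing it, then acknowledges the
    # adviser's explanation.
    root = Node("plan", "SOLVE-PROBLEM")
    enter = root.attach("plan", "ENTER-DATA")
    clar = enter.attach("clarification", "how to use the keyboard")
    clar.attach("acknowledgement", "how to use the keyboard")

The subordination relations of the resulting tree are exactly the relations used in the analyses that follow.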
The coder then analyzed the distribution of non-pronominal noun phrases and pronominal noun phrases throughout the dialogue. The coder also noted words and phrases occurring at the boundaries of the subdialogues and mapped the distribution of the antecedents of pronominal and non-pronominal anaphors.

ANALYSIS OF THE DIALOGUES

ANALYSIS OF THE USERS' TASK

Three main types of subdialogues have been encountered, associated with each aspect of the task described above:

1. Plan-goal statement subdialogues occur when the user describes a goal, or a plan, or the execution of actions composing the plan. This type of subdialogue may be an adjunct to the goal or plan because expressing them verbally might not be essential for their satisfaction or realization (though expressing them verbally helps the adviser understand the user).

2. Clarification subdialogues occur when the user requests information from the adviser so that the user can satisfy a goal. In this study, these subdialogues arise from the constraints on the type of meta-plans available, ASK-ADVISER-HELP. There are two main types of clarification subdialogues: 1) those concerning the determination of goals and plans of the user (e.g., "What should I do next?", "How do I access a vector?"); 2) those concerning the arguments (or objects) in goals and plans (e.g., "What is a vector?"). In some cases, the clarification subdialogues arise from the prerequisite on the meta-plan, that is, to assure mutual understanding. For example, the user will verify that he/she has identified the correct referent for an anaphor in the adviser's utterances.

3. Acknowledgement subdialogues occur when the user informs the adviser that he/she believes that he/she has understood an explanation. They arise from the prerequisite on the meta-plan, that is, to assure mutual understanding.

A small subset of the graphical representation of a simplified subtask structure and of dialogue segmentation and structure is given in Figure 1 to show how the task structure partially influences the dialogue structure.

    [Figure 1: TASK AND DIALOGUE STRUCTURES -- a graphical representation of a simplified subtask structure and the corresponding dialogue segmentation; the diagram itself is not recoverable from the source.]

DISTRIBUTION OF NON-PRONOMINAL AND PRONOMINAL NOUN PHRASES

Non-pronominal noun phrases play a role in indicating and realizing topic shifts in a dialogue. Since new subdialogues are assumed to correspond to topic shifts, one can predict that non-pronominal noun phrases will tend to occur more frequently at the beginning of subdialogues than later in the subdialogues. On the other hand, pronominal noun phrases play a role in indicating and realizing topic continuity in a dialogue. Since new topics are introduced at the beginning of new subdialogues and developed in the following sentences, one can predict that pronominal noun phrases will tend to occur more frequently after the first sentence in the subdialogues.

As can be seen in Table 1, there is a clear trend for the number of non-pronominal noun phrases to decrease as the subdialogue progresses, especially for the most frequent subdialogue lengths (i.e., 2 and 3 sentences), but less marked for the most infrequent subdialogue lengths (i.e., 4 and 5 sentences). Moreover, there is a clear increase in the number of pronominal noun phrases from the first sentence to the second sentence in the subdialogues, though again less reliable for the least frequent subdialogue lengths (i.e., 4 and 5 sentences).
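Distributions of this kind are mechanical to tabulate once the dialogue has been segmented and each noun phrase tagged. A minimal sketch (Python), under an invented input format in which a subdialogue is a list of sentences and a sentence is a list of "pro"/"nonpro" tags, one per noun phrase:

    # Count noun phrases by type, sentence position, and subdialogue length.
    from collections import defaultdict

    def tabulate(subdialogues):
        counts = defaultdict(int)   # (np_type, position, length) -> count
        for sub in subdialogues:
            length = len(sub)
            for position, sentence in enumerate(sub, start=1):
                for np_type in sentence:
                    counts[(np_type, position, length)] += 1
        return counts

    example = [
        [["nonpro", "nonpro"], ["pro"]],        # a 2-sentence subdialogue
        [["nonpro"], ["pro", "pro"], ["pro"]],  # a 3-sentence subdialogue
    ]
    print(tabulate(example)[("pro", 2, 3)])     # -> 2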
A complete statistical analysis of these data is presented in Guindon, Sladky, Brunner, and Conner (1986).

Table 1: DISTRIBUTION OF NOUN PHRASES

    NON-PRONOMINAL NOUN PHRASES
    Sentence    Subdialogue length in sentences
    number        2     3     4     5
    S1          234    99    30    28
    S2          114    76    49    21
    S3                 46    30    22
    S4                       29    20
    S5                             11

    PRONOMINAL NOUN PHRASES
    Sentence    Subdialogue length in sentences
    number        2     3     4     5
    S1           13     2     5     0
    S2           24    15     4     5
    S3                  9    11     2
    S4                        6     4
    S5                              8

The observed distributions of non-pronominal and pronominal noun phrases follow the predictions arising from previous work in linguistics and psychology. Because this analysis was performed independently of the dialogue segmentation and subordination, it is a converging analysis and it supports the dialogue structure derived on the basis of the task structure and the users' and adviser's plans and goals. This analysis supports the value of the concept of a dialogue structure and also supports our proposed scheme to derive such dialogue structures.

DISTRIBUTION OF THE ANTECEDENTS OF ANAPHORS

The subdialogues were indexed as shown in Table 2. The current subdialogue, labelled N, is the location of the anaphor to be resolved. All subdialogues are indexed relative to the current subdialogue N. Thus, the node N-1 immediately dominates N, the node N-2 dominates N-1, and so on. The nodes subordinate to each of the nodes dominating N are indexed beginning with the left-most node and proceeding rightward. Thus, if N-1 is the first node dominating N, the left-most node subordinate to N-1 will be N-1/L1 and each sibling to the right will be N-1/L2, N-1/L3, etc.

    [Table 2: INDEXING OF THE SUBDIALOGUES -- a tree diagram showing the current subdialogue N, the dominating nodes N-1 through N-3, and left-subordinate nodes such as N-1/L1 and N-2/(L1)1; the diagram itself is not recoverable from the source.]

Anaphoric - Pronominal Noun Phrases

Pronominal anaphors are used to refer to discourse entities that are in focus. Such entities should be either recent or of primary importance in the dialogue. Figure 2 represents graphically the distribution of the antecedents of pronominal noun phrases with a band, with the highest frequencies shown with the widest bands. For the sake of brevity, the exact frequencies are not reported here but can be found in Guindon, Sladky, Brunner, and Conner (1986). Figure 2 shows that the majority of pronominal antecedents are located in the current subdialogue, with their frequency decreasing as distance from the anaphor increases. The current subdialogue contains recent antecedents. Then, they are most frequently found in the parent subdialogue, which contains important and recent antecedents. Finally, a few pronominal anaphors (i.e., it) have their antecedent (i.e., the statistical package) found in the root subdialogue, which contains important antecedents. Grosz (1977) also observed the use of it to refer to an important concept that had not been mentioned for many sentences.

    [Figure 2: ANTECEDENT DISTRIBUTION -- antecedent frequencies drawn as bands over the subdialogue tree, with a legend distinguishing pronominal and non-pronominal noun phrases; the diagram itself is not recoverable from the source.]

These data demonstrate the existence of constraints at the dialogue level on the distribution of the antecedents of pronominal anaphors: most antecedents are located in the current subdialogue or in its immediate superordinate, and a few antecedents co-specifying the main topic(s) of the dialogue are located at the root of the dialogue. These data strongly suggest that recency plays a role within the current subdialogue, but also that another factor must be invoked to explain the high frequency of antecedents observed in N-1 and in the root subdialogue.
This other factor is topicality or importance (Guindon, 1985; Kintsch and van Dijk, 1978). A parent subdialogue describes information that is important to the information described in a subordinate subdialogue. Moreover, the antecedent statistical package was located at the "root" subdialogue of the dialogue structure. In other words, it was one of the most important concepts mentioned in the dialogue and, because of its importance, stayed in the user's and adviser's short-term memory during the complete dialogue and could be referred to by using a pronoun. The allocation of short-term memory during discourse comprehension corresponds to the concept of attentional state (Grosz and Sidner, 1985) and is described in more detail in Guindon (1985). The task structure and the user's meta-plans correspond to the intentional structure described by Grosz and Sidner (1985). Note that the segmentation of the task into subtasks directs the segmentation of the dialogue into subdialogues and is also a determinant of focus shifts and the attentional state.

The antecedent distribution for pronominal anaphors is consistent with the dialogue structure derived from the user's plans and goals and describes principled and psychologically valid constraints on the use of pronominal anaphors over an extended dialogue. As a consequence, the validity of the derived dialogue structure is increased.

Anaphoric - Non-pronominal Definite Noun Phrases

Selecting the proper antecedent for a non-pronominal definite noun phrase anaphor is less difficult than for a pronominal anaphor, since more semantic information is provided for matching the description of the antecedent. For this reason we would expect the distribution for antecedents of non-pronominal definite noun phrases to be far less constrained than the distribution for pronominal noun phrases. Figure 2 shows that this is the case. Definite noun phrase antecedents range over every dominant node N-1 through N-5 and over a few left-branching subordinate nodes. Nevertheless, there is a strong tendency for antecedents to be locally positioned in N and N-1. Their distribution is consistent with the dialogue structure derived on the basis of an analysis of the task and an analysis of the users' and adviser's plans and goals.

BOUNDARY MARKERS

The analysis of boundary markers revealed reliable indicators at the opening of subdialogues in adviser-user dialogues. This is shown in Table 3. The determined boundary markers were consistent with those found by Grosz (1977), Reichman (1981), and Polanyi and Scha (1983). The boundary markers can help identify three major types of subdialogues: 1) plan-goal statement; 2) clarification; 3) acknowledgement. Acknowledgement subdialogues occur very frequently at the end of clarification subdialogues, also acting as closing boundary markers for clarification subdialogues. A more detailed analysis of the boundary markers is given in Guindon, Sladky, Brunner, and Conner (1986). A small subset of these markers for each type of discourse act is given in Table 3 (the symbol < > means optional, "or" is indicated as [ ( ) ( ) ], and ACTION means an instance from a class of actions).

    Subdialogue Types      Boundary Markers
    -----------------------------------------------------------------
    Plan-goal statement
    1. ... <so> ... I [(want) (need) (have to) (am going to) (should)] ...
    2. ... let's [(try) (do)] ... ACTION ...
    3. ... I will ACTION ...

    Clarification
    1. all types of interrogatives (e.g.,
       How do I compute ..? What is a vector?)
    2. negatives expressing lack of knowledge (e.g., ...I do not know...;
       ...I do not remember...; ...I am not sure...)
    3. declaratives expressing uncertainty (e.g., ...I assume that...;
       ...it might be that...)

    Acknowledgement
    1. discourse particles (e.g., OK; Allright; Good)
    2. ... I [(see) (understand)] ...
    3. repetition, restatement or elaboration of the adviser's last
       utterance with clue words (e.g., In other words, ...; For instance ...)

Table 3: EXAMPLES OF BOUNDARY MARKERS

The boundary markers are part of the linguistic structure of dialogue, and so is the distribution of the non-pronominal and pronominal noun phrases. Both analyses are consistent with the dialogue structure derived on the basis of the task structure and the users' and adviser's plans and goals, and they increase the validity of the derived dialogue structure. Both analyses also show that shifts in focus during discourse comprehension can be signalled in the surface form of the conversants' utterances. As a consequence, they can be capitalized upon by natural language interfaces.

CONCLUSION

Three independent converging analyses support the dialogue structure derived on the basis of the task structure and the users' and adviser's plans and goals. The distribution of the non-pronominal noun phrases shows that they occur more frequently at the beginning of subdialogues than later in the subdialogues, as should be expected if non-pronominal noun phrases introduce new entities in the dialogue or reinstate previous ones. The distribution of the pronominal noun phrases shows that they occur less frequently in the first sentence than in the second sentence of the subdialogues, as can be expected if they act as indicators of topic continuity. The distribution of pronominal antecedents shows that speakers are sensitive to the organization of a dialogue into a hierarchical structure composed of goal-oriented subdialogues. Antecedents of pronominal noun phrases tend to occur in the current subdialogue, in its parent, or in the root subdialogue. In particular, concepts mentioned in the current subdialogue, its parent, or the root subdialogue tend to be in focus. In the case of non-pronominal definite noun phrase anaphors, while it is possible for antecedents to be much more widely spread across the dialogue, they also tend to be located in the current subdialogue or its parent. As a consequence, it would be possible to restrict and order the search for the antecedents of pronominal and non-pronominal definite noun phrases on the basis of the type of dialogue structure exemplified in this paper. The analysis of boundary markers reveals reliable and distinctive surface linguistic markers for different types of subdialogues.

The notion of a dialogue structure based on the task structure has been empirically supported. The notion of focusing and its relation to the segmentation of the dialogue into subdialogues has also been supported, especially by the antecedent distribution of the pronominal and non-pronominal noun phrases. The results of Guindon (1985) showing different anaphora resolution times for different types of anaphors with antecedents in or out of focus also support the "focusing" theories of anaphora resolution. This gives an impetus to include a model of the dialogue structure and a focusing mechanism in natural language interfaces.
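As a rough illustration of how the surface markers of Table 3 could be capitalized upon in such an interface, the sketch below classifies an utterance by matching a few marker patterns (Python; the regular expressions are loose, invented paraphrases of the table, not a validated classifier):

    # Crude boundary-marker classifier based on Table 3.
    import re

    PATTERNS = [
        ("plan-goal", re.compile(r"\bI\s+(want|need|have to|am going to|should|will)\b", re.I)),
        ("plan-goal", re.compile(r"\blet'?s\s+(try|do)\b", re.I)),
        ("clarification", re.compile(r"^\s*(how|what|where|why|which|who)\b.*\?", re.I)),
        ("clarification", re.compile(r"\bI\s+(do not|don'?t)\s+(know|remember)\b", re.I)),
        ("acknowledgement", re.compile(r"^\s*(ok(ay)?|all\s?right|good)\b|\bI\s+(see|understand)\b", re.I)),
    ]

    def classify(utterance):
        """Return the subdialogue type suggested by surface markers, if any."""
        for label, pattern in PATTERNS:
            if pattern.search(utterance):
                return label
        return None

    print(classify("So I need to enter the data first."))  # -> plan-goal
    print(classify("How do I access a vector?"))           # -> clarification
    print(classify("OK, I see."))                          # -> acknowledgement

Such markers can only suggest a segmentation; as the derivation procedure above shows, the task structure is still needed to decide where a new subdialogue attaches.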
However, much further work has to be done to define precisely how the dialogue structure could be computed from the task structure and the meta-plans of the conversants, and how precisely the anaphora resolution process would capitalize on this structure.

REFERENCES

Fox, A.B. 1985. Discourse Structure and Anaphora in Written and Conversational English. Ph.D. dissertation, University of California, Los Angeles.

van Dijk, T.A. & Kintsch, W. 1983. Strategies of Discourse Comprehension. Academic Press: New York.

Grosz, B.J. 1977. The representation and use of focus in dialogue understanding. Technical Report 151, Artificial Intelligence Center, SRI International.

Grosz, B.J., Joshi, A.K., & Weinstein, S. 1983. Providing a Unified Account of Definite Noun Phrases in Discourse. Proceedings of the 21st Annual Meeting of the Association for Computational Linguistics, Boston, Massachusetts.

Guindon, R. & Kintsch, W. 1984. Priming Macropropositions: Evidence for the Primacy of Macropropositions in the Memory for Text. Journal of Verbal Learning and Verbal Behavior, 28, 508-518.

Guindon, R. 1985. Anaphora resolution: Short-term memory and focusing. Proceedings of the Association for Computational Linguistics, University of Chicago, Chicago.

Guindon, R., Sladky, P., Brunner, H., & Conner, J. 1986. The structure of user-adviser dialogues: Is there method in their madness? Microelectronics and Computer Technology Technical Report (in preparation).

Kintsch, W. & van Dijk, T.A. 1978. Toward a model of text comprehension and production. Psychological Review, 85, 363-394.

Linde, C. 1979. Focus of attention and the choice of pronouns in discourse. In T. Givon (editor), Syntax and Semantics, Vol. 12: Discourse and Syntax. Academic Press Inc.

Litman, D.J. & Allen, J.F. 1984. A plan recognition model for subdialogues in conversations. Technical Report 141, Department of Computer Science, University of Rochester.

Polanyi, L. & Scha, R.J.H. 1983. The syntax of discourse. Text 3 (3), 261-270.

Reichman, R. 1981. Plain speaking: A theory and grammar of spontaneous discourse. Technical Report 4681, Bolt, Beranek, and Newman, Inc.

Suchman, L.A. 1985. Plans and situated actions: The problem of human-machine communication. Xerox Corporation Technical Report.

Sidner, C.L. 1983. Focusing in the comprehension of definite anaphora. In M. Brady (Ed.), Computational Models of Discourse. MIT Press.

Sidner, C.L. & Grosz, B.J. 1985. Discourse Structure and the Proper Treatment of Interruptions. Proceedings of the Ninth International Joint Conference on Artificial Intelligence, Los Angeles, California.
COMMONSENSE METAPHYSICS AND LEXICAL SEMANTICS

Jerry R. Hobbs, William Croft, Todd Davies, Douglas Edwards, and Kenneth Laws
Artificial Intelligence Center
SRI International

1 Introduction

In the TACITUS project for using commonsense knowledge in the understanding of texts about mechanical devices and their failures, we have been developing various commonsense theories that are needed to mediate between the way we talk about the behavior of such devices and causal models of their operation. Of central importance in this effort is the axiomatization of what might be called "commonsense metaphysics". This includes a number of areas that figure in virtually every domain of discourse, such as scalar notions, granularity, time, space, material, physical objects, causality, functionality, force, and shape. Our approach to lexical semantics is then to construct core theories of each of these areas, and then to define, or at least characterize, a large number of lexical items in terms provided by the core theories. In the TACITUS system, processes for solving pragmatics problems posed by a text will use the knowledge base consisting of these theories in conjunction with the logical forms of the sentences in the text to produce an interpretation. In this paper we do not stress these interpretation processes; this is another, important aspect of the TACITUS project, and it will be described in subsequent papers.

This work represents a convergence of research in lexical semantics in linguistics and efforts in AI to encode commonsense knowledge. Lexical semanticists over the years have developed formalisms of increasing adequacy for encoding word meaning, progressing from simple sets of features (Katz and Fodor, 1963) to notations for predicate-argument structure (Lakoff, 1972; Miller and Johnson-Laird, 1976), but the early attempts still limited access to world knowledge and assumed only very restricted sorts of processing. Workers in computational linguistics introduced inference (Rieger, 1974; Schank, 1975) and other complex cognitive processes (Herskovits, 1982) into our understanding of the role of word meaning. Recently, linguists have given greater attention to the cognitive processes that would operate on their representations (e.g., Talmy, 1983; Croft, 1986). Independently, in AI an effort arose to encode large amounts of commonsense knowledge (Hayes, 1979; Hobbs and Moore, 1985; Hobbs et al., 1985). The research reported here represents a convergence of these various developments. By developing core theories of several fundamental phenomena and defining lexical items within these theories, using the full power of predicate calculus, we are able to cope with complexities of word meaning that have hitherto escaped lexical semanticists, within a framework that gives full scope to the planning and reasoning processes that manipulate representations of word meaning.

In constructing the core theories we are attempting to adhere to several methodological principles.

1. One should aim for characterization of concepts, rather than definition. One cannot generally expect to find necessary and sufficient conditions for a concept. The most we can hope for is to find a number of necessary conditions and a number of sufficient conditions. This amounts to saying that a great many predicates are primitive, but primitives that are highly interrelated with the rest of the knowledge base.

2. One should determine the minimal structure necessary for a concept to make sense.
In efforts to axiomatize some area, there are two positions one may take, exemplified by set theory and by group theory. In axiomatizing set theory, one attempts to capture exactly some concept one has strong intuitions about. If the axiomatization turns out to have unexpected models, this exposes an inadequacy. In group theory, by contrast, one characterizes an abstract class of structures. If there turn out to be unexpected models, this is a serendipitous discovery of a new phenomenon that we can reason about using an old theory. The pervasive character of metaphor in natural language discourse shows that our commonsense theories of the world ought to be much more like group theory than set theory. By seeking minimal structures in axiomatizing concepts, we optimize the possibilities of using the theories in metaphorical and analogical contexts. This principle is illustrated below in the section on regions. One consequence of this principle is that our approach will seem more syntactic than semantic. We have concentrated more on specifying axioms than on constructing models. Our view is that the chief role of models in our effort is for proving the consistency and independence of sets of axioms, and for showing their adequacy. As an example of the last point, many of the spatial and temporal theories we construct are intended at least to have Euclidean space or the real numbers as one model, and a subclass of graph-theoretical structures as other models.

3. A balance must be struck between attempting to cover all cases and aiming only for the prototypical cases. In general, we have tried to cover as many cases as possible with an elegant axiomatization, in line with the two previous principles, but where the formalization begins to look baroque, we assume that higher processes will suspend some inferences in the marginal cases. We assume that inferences will be drawn in a controlled fashion. Thus, every outré, highly context-dependent counterexample need not be accounted for, and to a certain extent, definitions can be geared specifically for a prototype.

4. Where competing ontologies suggest themselves in a domain, one should attempt to construct a theory that accommodates both. Rather than commit oneself to adopting one set of primitives rather than another, one should show how each set of primitives can be characterized in terms of the other. Generally, each of the ontologies is useful for different purposes, and it is convenient to be able to appeal to both. Our treatment of time illustrates this.

5. The theories one constructs should be richer in axioms than in theorems. In mathematics, one expects to state half a dozen axioms and prove dozens of theorems from them. In encoding commonsense knowledge it seems to be just the opposite. The theorems we seek to prove on the basis of these axioms are theorems about specific situations which are to be interpreted, in particular, theorems about a text that the system is attempting to understand.

6. One should avoid falling into "black holes". There are a few "mysterious" concepts which crop up repeatedly in the formalization of commonsense metaphysics. Among these are "relevant" (that is, relevant to the task at hand) and "normative" (or conforming to some norm or pattern). To insist upon giving a satisfactory analysis of these before using them in analyzing other concepts is to cross the event horizon that separates lexical semantics from philosophy.
On the other hand, our experience suggests that to avoid their use entirely is crippling; the lexical semantics of a wide variety of other terms depends upon them. Instead, we have decided to leave them minimally analyzed for the moment and use them without scruple in the analysis of other commonsense concepts. This approach will allow us to accumulate many examples of the use of these mysterious concepts, and in the end, contribute to their successful analysis. The use of these concepts appears below in the discussions of the words "immediately", "sample", and "operate".

We chose as an initial target problem to encode the commonsense knowledge that underlies the concept of "wear", as in a part of a device wearing out. Our aim was to define "wear" in terms of predicates characterized elsewhere in the knowledge base and to infer consequences of wear. For something to wear, we decided, is for it to lose imperceptible bits of material from its surface due to abrasive action over time. One goal, which we have not yet achieved, is to be able to prove as a theorem that since the shape of a part of a mechanical device is often functional and since loss of material can result in a change of shape, wear of a part of a device can result in the failure of the device as a whole. In addition, as we have proceeded, we have characterized a number of words found in a set of target texts, as it has become possible.

We are encoding the knowledge as axioms in what is, for the most part, a first-order logic, described in Hobbs (1985a), although quantification over predicates is sometimes convenient. In the formalism there is a nominalization operator " ' " for reifying events and conditions, as expressed in the following axiom schema:

    (∀x) p(x) ≡ (∃e) p'(e, x) ∧ Exist(e)

That is, p is true of x if and only if there is a condition e of p being true of x and e exists in the real world.

In our implementation so far, we have been proving simple theorems from our axioms using the CG5 theorem-prover developed by Mark Stickel (1982), but we are only now beginning to use the knowledge base in text processing.

2 Requirements on Arguments of Predicates

There is a notational convention used below that deserves some explanation. It has frequently been noted that relational words in natural language can take only certain types of words as their arguments. These are usually described as selectional constraints. The same is true of predicates in our knowledge base. They are expressed below by rules of the form

    p(x, y) : r(x, y)

This means that for p even to make sense applied to x and y, it must be the case that r is true of x and y. The logical import of this rule is that wherever there is an axiom of the form

    (∀x, y) p(x, y) ⊃ q(x, y)

this is really to be read as

    (∀x, y) p(x, y) ∧ r(x, y) ⊃ q(x, y)

The checking of selectional constraints, therefore, falls out as a by-product of other logical operations: the constraint r(x, y) must be verified if anything else is to be proven from p(x, y). The simplest example of such an r(x, y) is a conjunction of sort constraints r1(x) ∧ r2(y). Our approach is a generalization of this, because much more complex requirements can be placed on the arguments. Consider, for example, the verb "range".
If x ranges from y to z, there must be a scale s that includes y and z, and x must be a set of entities that are located at various places on the scale. This can be represented as follows:

    range(x, y, z) : (∃s) scale(s) ∧ y ∈ s ∧ z ∈ s ∧ set(x)
                     ∧ (∀u)[u ∈ x ⊃ (∃v) v ∈ s ∧ at(u, v)]

3 The Knowledge Base

3.1 Sets and Granularity

At the foundation of the knowledge base is an axiomatization of set theory. It follows the standard Zermelo-Fraenkel approach, except that there is no Axiom of Infinity.

Since so many concepts used in discourse are grain-dependent, a theory of granularity is also fundamental (see Hobbs 1985b). A grain is defined in terms of an indistinguishability relation, which is reflexive and symmetric, but not necessarily transitive. One grain can be a refinement of another, with the obvious definition. The most refined grain is the identity grain, i.e., the one in which every two distinct elements are distinguishable. One possible relationship between two grains, one of which is a refinement of the other, is what we call an "Archimedean relation", after the Archimedean property of real numbers. Intuitively, if enough events occur that are imperceptible at the coarser grain g2 but perceptible at the finer grain g1, then the aggregate will eventually be perceptible at the coarser grain. This is an important property in phenomena subject to the Heap Paradox. Wear, for instance, eventually has significant consequences.

3.2 Scales

A great many of the most common words in English have scales as their subject matter. This includes many prepositions, the most common adverbs, comparatives, and many abstract verbs. When spatial vocabulary is used metaphorically, it is generally the scalar aspect of space that carries over to the target domain. A scale is defined as a set of elements, together with a partial ordering and a granularity (or an indistinguishability relation). The partial ordering and the indistinguishability relation are consistent with each other:

    (∀x, y, z) x < y ∧ y ~ z ⊃ x < z ∨ x ~ z

It is useful to have an adjacency relation between points on a scale, and there are a number of ways we could introduce it. We could simply take it to be primitive; in a scale having a distance function, we could define two points to be adjacent when the distance between them is less than some ε; finally, we could define adjacency in terms of the grain-size:

    (∀x, y, s) adj(x, y, s) ≡ (∃z) z ~ x ∧ z ~ y ∧ ¬[x ~ y]

Two important possible properties of scales are connectedness and denseness. We can say that two elements of a scale are connected by a chain of adj relations:

    (∀x, y, s) connected(x, y, s) ≡ adj(x, y, s)
                ∨ (∃z) adj(x, z, s) ∧ connected(z, y, s)

A scale is connected (sconnected) if all pairs of elements are connected. A scale is dense if between any two points there is a third point, until the two points are so close together that the grain-size won't let us tell what the situation is. Cranking up the magnification could well resolve the continuous space into a discrete set, as objects into atoms.

    (∀s) dense(s) ≡ (∀x, y, <) x ∈ s ∧ y ∈ s ∧ order(<, s) ∧ x < y
                ⊃ (∃z)(x < z ∧ z < y) ∨ (∃z)(x ~ z ∧ z ~ y)

This captures the commonsense notion of continuity. A subscale of a scale has as its elements a subset of the elements of the scale, and has as its partial ordering and its grain the partial ordering and the grain of the scale.
    (∀s1, <, ~) order(<, s1) ∧ grain(~, s1) ⊃
        (∀s2)[subscale(s2, s1) ≡ subset(s2, s1) ∧ order(<, s2) ∧ grain(~, s2)]

An interval can be defined as a connected subscale:

    (∀i) interval(i) ≡ (∃s) scale(s) ∧ subscale(i, s) ∧ sconnected(i)

The relations between time intervals that Allen and Kautz (1985) have defined can be defined in a straightforward manner in the approach presented here, applied to intervals in general.

A concept closely related to scales is that of a "cycle". This is a system which has a natural ordering locally but contains a loop globally. Examples include the color wheel, clock times, and geographical locations ordered by "east of". We have axiomatized cycles in terms of a ternary between relation, whose axioms parallel the axioms for a partial ordering.

The figure-ground relationship is of fundamental importance in language. We encode this with the primitive predicate at. The minimal structure that seems to be necessary for something to be a ground is that of a scale; hence, this is a selectional constraint on the arguments of at.

    at(x, y) : (∃s) y ∈ s ∧ scale(s)

At this point, we are already in a position to define some fairly complex words. As an illustration, we give the example of "range" as in "x ranges from y to z":

    (∀x, y, z) range(x, y, z) ≡ (∃s, s1, u1, u2) scale(s) ∧ subscale(s1, s)
        ∧ bottom(y, s1) ∧ top(z, s1)
        ∧ u1 ∈ x ∧ at(u1, y) ∧ u2 ∈ x ∧ at(u2, z)
        ∧ (∀u)[u ∈ x ⊃ (∃v) v ∈ s1 ∧ at(u, v)]

A very important scale is the linearly ordered scale of numbers. We do not plan to reason axiomatically about numbers, but it is useful in natural language processing to have encoded a few facts about numbers. For example, a set has a cardinality which is an element of the number scale.

Verticality is a concept that would most properly be analyzed in the section on space, but it is a property that many other scales have acquired metaphorically, for whatever reason. The number scale is one of these. Even in the absence of an analysis of verticality, it is a useful property to have as a primitive in lexical semantics.

The word "high" is a vague term that asserts an entity is in the upper region of some scale. It requires that the scale be a vertical one, such as the number scale. The verticality requirement distinguishes "high" from the more general term "very"; we can say "very hard" but not "highly hard". The phrase "highly planar" sounds all right because the high register of "planar" suggests a quantifiable, scientific accuracy, whereas the low register of "flat" makes "highly flat" sound much worse.

The test of any definition is whether it allows one to draw the appropriate inferences. In our target texts, the phrase "high usage" occurs. Usage is a set of using events, and the verticality requirement on "high" forces us to coerce the phrase into "a high or large number of using events". Combining this with an axiom that says that the use of a mechanical device involves the likelihood of abrasive events, as defined below, and with the definition of "wear" in terms of abrasive events, we should be able to conclude the likelihood of wear.

3.3 Time: Two Ontologies

There are two possible ontologies for time. In the first, the one most acceptable to the mathematically minded, there is a time line, which is a scale having some topological structure.
We can stipulate the time line to be linearly ordered (although it is not in approaches that build ignorance of relative times into the representation of time (e.g., Hobbs, 1974) nor in approaches using branching futures (e.g., McDermott, 1985)), and we can stipulate it to be dense (although it is not in the situation calculus). We take before to be the ordering on the time line:

    (∀t1, t2) before(t1, t2) ≡ (∃T, <) Time-line(T) ∧ order(<, T)
                ∧ t1 ∈ T ∧ t2 ∈ T ∧ t1 < t2

We allow both instants and intervals of time. Most events occur at some instant or during some interval. In this approach, nearly every predicate takes a time argument.

In the second ontology, the one that seems to be more deeply rooted in language, the world consists of a large number of more or less independent processes, or histories, or sequences of events. There is a primitive relation change between conditions. Thus,

    change(e1, e2) ∧ p'(e1, x) ∧ q'(e2, x)

says that there is a change from the condition e1 of p being true of x to the condition e2 of q being true of x.

The time line in this ontology is then an artificial construct, a regular sequence of imagined abstract events--think of them as ticks of a clock in the National Bureau of Standards--to which other events can be related. The change ontology seems to correspond to the way we experience the world. We recognize relations of causality, change of state, and copresence among events and conditions. When events are not related in these ways, judgments of relative time must be mediated by copresence relations between the events and events on a clock, and change of state relations on the clock.

The predicate change possesses a limited transitivity. There has been a change from Reagan being an actor to Reagan being President, even though he was governor in between. But we probably do not want to say there has been a change from Reagan being an actor to Margaret Thatcher being Prime Minister, even though the second comes after the first.

We can say that times, viewed in this ontology as events, always have a change relation between them.

    (∀t1, t2) before(t1, t2) ⊃ change(t1, t2)

The predicate change is related to before by the axiom

    (∀e1, e2) change(e1, e2) ⊃ (∃t1, t2) at(e1, t1) ∧ at(e2, t2) ∧ before(t1, t2)

This does not allow us to derive change of state from temporal succession. For this, we need axioms of the form

    (∀e1, e2, t1, t2, x) p'(e1, x) ∧ at(e1, t1) ∧ q'(e2, x) ∧ at(e2, t2)
                ∧ before(t1, t2) ⊃ change(e1, e2)

That is, if x is p at time t1 and q at a later time t2, then there has been a change of state from one to the other. Time arguments in predications can be viewed as abbreviations:

    (∀x, t) p(x, t) ≡ (∃e) p'(e, x) ∧ at(e, t)
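The relation between the two ontologies can be made concrete in a small sketch (Python; the tuple encoding of conditions is invented for illustration and is not the TACITUS representation). It generates change-of-state facts from time-line facts in the spirit of the axiom schema just given, and the same-individual test reflects the limited transitivity of change:

    # Deriving change-of-state relations from time-line facts.
    # A fact is (predicate, entity, time) -- an assumed encoding.

    facts = [
        ("actor", "Reagan", 1960),
        ("governor", "Reagan", 1970),
        ("president", "Reagan", 1985),
        ("prime-minister", "Thatcher", 1985),
    ]

    def changes(facts):
        """If p is true of x at t1 and q is true of x at a later t2,
        assert a change from the one condition to the other. No change
        is asserted across distinct individuals' histories."""
        out = []
        for p, x, t1 in facts:
            for q, y, t2 in facts:
                if x == y and t1 < t2:   # before(t1, t2), same individual
                    out.append(((p, x), (q, y)))
        return out

    for earlier, later in changes(facts):
        print("change:", earlier, "->", later)
    # Prints a change from Reagan-as-actor to Reagan-as-president, but no
    # change from Reagan-as-actor to Thatcher-as-prime-minister.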
The universal presence of both classes of lexical items and grammatical markers in natural languages requires a theory which can accommodate both ontologies, illustrating the importance of methodological principle 4.

Among temporal connectives, the word "while" presents interesting problems. In "e1 while e2", e2 must be an event occurring over a time interval; e1 must be an event and may occur either at a point or over an interval. One's first guess is that the point or interval for e1 must be included in the interval for e2. However, there are cases, such as

    It rained while I was in Philadelphia.

or

    The electricity should be off while the switch is being repaired.

which suggest the reading "e2 is included in e1". We came to the conclusion that one can infer no more than that e1 and e2 overlap, and any tighter constraints result from implicatures from background knowledge.

The word "immediately" also presents a number of problems. It requires its argument e to be an ordering relation between two entities x and y on some scale s:

    immediate(e) ⊃ (∃ x, y, s) less-than'(e, x, y, s)

It is not clear what the constraints on the scale are. Temporal and spatial scales are okay, as in "immediately after the alarm" and "immediately to the left", but the size scale isn't:

    * John is immediately larger than Bill.

Etymologically, it means that there are no intermediate entities between x and y on s. Thus,

    (∀ e, x, y, s) immediate(e) ∧ less-than'(e, x, y, s)
        ⊃ ¬(∃ z) less-than(x, z, s) ∧ less-than(z, y, s)

However, this will only work if we restrict z to be a relevant entity. For example, in the sentence

    We disengaged the compressor immediately after the alarm.

the implication is that no event that could damage the compressor occurred between the alarm and the disengagement, since the text is about equipment failure.

3.4 Spaces and Dimension: The Minimal Structure

The notion of dimension has been made precise in linear algebra. Since the concept of a region is used metaphorically as well as in the spatial sense, however, we were concerned to determine the minimal structure that a system requires for it to make sense to call it a space of more than one dimension. For a two-dimensional space, there must be a scale, or partial ordering, for each dimension. Moreover, the two scales must be independent, in that the order of elements on one scale cannot be determined from their order on the other. Formally,

    (∀ sp) space(sp) ≡
        (∃ s1, s2, <1, <2) scale1(s1, sp) ∧ scale2(s2, sp)
        ∧ order(<1, s1) ∧ order(<2, s2)
        ∧ (∃ x, y1, y2)(x <1 y1 ∧ x <2 y1 ∧ x <1 y2 ∧ y2 <2 x)

Note that this does not allow <2 to be simply the reverse of <1. An unsurprising consequence of this definition is that the minimal example of a two-dimensional space consists of three points (three points determine a plane), e.g., the points A, B, and C, where A <1 B, A <1 C, C <2 A, A <2 B. This is illustrated in Figure 1.

    [Figure 1: The simplest space.]

The dimensional scales are apparently found in all natural languages in relevant domains. The familiar three-dimensional space of common sense is defined by the three scale pairs "up-down", "front-back", and "left-right"; the two-dimensional plane of the commonsense conception of the earth's surface is represented by the two scale pairs "north-south" and "east-west".
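The three-point example can be checked mechanically. In the following sketch (ours; data and names are illustrative) the two orderings are encoded as sets of pairs and the independence condition of the definition is tested directly.

    # Checking the minimal two-dimensional space of Figure 1.
    points = {"A", "B", "C"}
    less1 = {("A", "B"), ("A", "C")}   # A <1 B, A <1 C
    less2 = {("C", "A"), ("A", "B")}   # C <2 A, A <2 B

    def is_space(pts, l1, l2):
        # There must be x, y1, y2 with x <1 y1, x <2 y1,
        # x <1 y2, and y2 <2 x.
        for x in pts:
            for y1 in pts:
                for y2 in pts:
                    if ((x, y1) in l1 and (x, y1) in l2 and
                            (x, y2) in l1 and (y2, x) in l2):
                        return True
        return False

    print(is_space(points, less1, less2))   # True: three points suffice
    # Taking less2 to be simply the reverse of less1 fails the test,
    # exactly as the definition requires.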
The simplest, although not the only, way to define adjacency in the space is as adjacency on both scales:

    (∀ x, y, sp) adj(x, y, sp) ≡
        (∃ s1, s2) scale1(s1, sp) ∧ scale2(s2, sp) ∧ adj(x, y, s1) ∧ adj(x, y, s2)

A region is a subset of a space. The surface and interior of a region can be defined in terms of adjacency, in a manner paralleling the definition of a boundary in point-set topology. In the following, s is the boundary or surface of a two- or three-dimensional region r embedded in a space sp:

    (∀ s, r) surface(s, r, sp) ≡
        (∀ x) x ∈ r ⊃ [x ∈ s ≡ (∃ y)(y ∈ sp ∧ ¬(y ∈ r) ∧ adj(x, y, sp))]

Finally, we can define the notion of "contact" in terms of points in different regions being adjacent:

    (∀ r1, r2, sp) contact(r1, r2, sp) ≡
        disjoint(r1, r2) ∧ (∃ x, y)(x ∈ r1 ∧ y ∈ r2 ∧ adj(x, y, sp))

By picking the scales and defining adjacency right, we can talk about points of contact between communicational networks, systems of knowledge, and other metaphorical domains. By picking the scales to be the real line and defining adjacency in terms of ε-neighborhoods, we get Euclidean space and can talk about contact between physical objects.

3.5 Material

Physical objects and materials must be distinguished, just as they are apparently distinguished in every natural language, by means of the count noun - mass noun distinction. A physical object is not a bit of material, but rather is composed of a bit of material at any given time. Thus, rivers and human bodies are physical objects, even though their material constitution changes over time. This distinction also allows us to talk about an object losing material through wear and still being the same object.

We will say that an entity b is a bit of material by means of the expression material(b). Bits of material are characterized by both extension and cohesion. The primitive predication occupies(b, r, t) encodes extension, saying that a bit of material b occupies a region r at time t. The topology of a bit of material is then parasitic on the topology of the region it occupies. A part b1 of a bit of material b is a bit of material whose occupied region is always a subregion of the region occupied by b. Point-like particles (particle) are defined in terms of points in the occupied region, disjoint bits (disjointbit) in terms of disjointness of regions, and contact between bits in terms of contact between their regions. We can then state as follows the Principle of Non-Joint-Occupancy, that two bits of material cannot occupy the same place at the same time:

    (∀ b1, b2) disjointbit(b1, b2) ⊃
        (∀ x, y, b3, b4) interior(b3, b1) ∧ interior(b4, b2)
            ∧ particle(x, b3) ∧ particle(y, b4)
            ⊃ ¬(∃ z)(at(x, z) ∧ at(y, z))

At some future point in our work, this may emerge as a consequence of a richer theory of cohesion and force.

The cohesion of materials is also a primitive property, for we must distinguish between a bump on the surface of an object and a chip merely lying on the surface. Cohesion depends on a primitive relation bond between particles of material, paralleling the role of adj in regions. The relation attached is defined as the transitive closure of bond. A topology of cohesion is built up in a manner analogous to the topology of regions. In addition, we have encoded the relation that bond bears to motion, i.e. that bonded bits remain adjacent and that one moves when the other does, and the relation of bond to force, i.e. that there is a characteristic force that breaks a bond in a given material.
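The definition of attached as the transitive closure of bond is easy to illustrate computationally. The sketch below is ours; particle names and bonds are invented, and the symmetric closure is taken as well, since bonding is naturally undirected.

    # `attached` as the transitive (and symmetric) closure of `bond`.
    def attached(bonds):
        # bonds: set of frozensets {p, q} of directly bonded particles.
        neighbors = {}
        for b in bonds:
            p, q = tuple(b)
            neighbors.setdefault(p, set()).add(q)
            neighbors.setdefault(q, set()).add(p)

        def reachable(p, q):
            # Depth-first search for a chain of bonds from p to q.
            seen, frontier = {p}, [p]
            while frontier:
                u = frontier.pop()
                for v in neighbors.get(u, ()):
                    if v == q:
                        return True
                    if v not in seen:
                        seen.add(v)
                        frontier.append(v)
            return False

        return reachable

    bonds = {frozenset(ab) for ab in [("p1", "p2"), ("p2", "p3"), ("p4", "p5")]}
    att = attached(bonds)
    print(att("p1", "p3"))  # True: a chain of bonds connects them
    print(att("p1", "p4"))  # False: a chip merely lying on the surface
                            # is not attached to the object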
Different materials react in different ways to forces of various strengths. Materials subjected to force exhibit or fail to exhibit several invariance properties, proposed by Hager (1985). If the material is shape-invariant with respect to a particular force, its shape remains the same. If it is topologically invariant, particles that are adjacent remain adjacent. Shape invariance implies topological invariance. Subject to forces of a certain strength or degree d1, a material ceases being shape-invariant. At a force of strength d2 ≥ d1, it ceases being topologically invariant, and at a force of strength d3 ≥ d2, it simply breaks. Metals exhibit the full range of possibilities, that is, 0 < d1 < d2 < d3 < ∞. For forces of strength d < d1, the material is "hard"; for forces of strength d where d1 < d < d2, it is "flexible"; for forces of strength d where d2 < d < d3, it is "malleable". Words such as "ductile" and "elastic" can be defined in terms of this vocabulary, together with predicates about the geometry of the bit of material. Words such as "brittle" (d1 = d2 = d3) and "fluid" (d2 = 0, d3 = ∞) can also be defined in these terms. While we should not expect to be able to define various material terms, like "metal" and "ceramic", we can certainly characterize many of their properties with this vocabulary.

Because of its invariance properties, material interacts with containment and motion. The word "clog" illustrates this. The predicate clog is a three-place relation: x clogs y against the flow of z. It is the obstruction by x of z's motion through y, but with the selectional restriction that z must be something that can flow, such as a liquid, gas, or powder. If a rope is passing through a hole in a board, and a knot in the rope prevents it from going through, we do not say that the hole is clogged. On the other hand, there do not seem to be any selectional constraints on x. In particular, x can be identical with z: glue, sand, or molasses can clog a passageway against its own flow. We can speak of clogging where the obstruction of flow is not complete, but it must be thought of as "nearly" complete.

3.6 Other Domains

3.6.1 Causal Connection

Attachment within materials is one variety of causal connection. In general, if two entities x and y are causally connected with respect to some behavior p of x, then whenever p happens to x, there is some corresponding behavior q that happens to y. In the case of attachment, p and q are both move. A particularly common variety of causal connection between two entities is one mediated by the motion of a third entity from one to the other. (This might be called a "vector boson" connection.) Photons mediating the connection between the sun and our eyes, rain drops connecting a state of the clouds with the wetness of our skin and clothes, a virus being transmitted from one person to another, and utterances passing between people are all examples of such causal connections. Barriers, openings, and penetration are all with respect to paths of causal connection.

3.6.2 Force

The concept of "force" is axiomatized, in a way consistent with Talmy's treatment (1985), in terms of the predications force(a, b, d1) and resist(b, a, d2): a forces against b with strength d1 and b resists a's action with strength d2. We can infer motion from facts about relative strength. This treatment can also be specialized to Newtonian force, where we have not merely movement, but acceleration.
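Both the invariance thresholds and the force/resist comparison lend themselves to a small illustration. The sketch below is ours, and the numeric thresholds are invented; it only encodes the classification and the inference of motion from relative strength described above.

    # Invariance thresholds d1 <= d2 <= d3 and the force/resist
    # comparison (illustrative).
    def response(d, d1, d2, d3):
        # Classify a material's response to a force of strength d.
        if d < d1:
            return "hard"        # shape-invariant
        if d < d2:
            return "flexible"    # shape changes, topology preserved
        if d < d3:
            return "malleable"   # adjacency no longer preserved
        return "breaks"

    INF = float("inf")
    metal   = (3.0, 7.0, 12.0)   # 0 < d1 < d2 < d3 < infinity
    brittle = (5.0, 5.0, 5.0)    # d1 = d2 = d3
    fluid   = (0.0, 0.0, INF)    # never shape-invariant, never breaks

    print(response(6.0, *metal))     # flexible
    print(response(6.0, *brittle))   # breaks
    print(response(6.0, *fluid))     # malleable

    def moves(force_strength, resist_strength):
        # force(a, b, d1) and resist(b, a, d2): b moves when a's
        # force exceeds b's resistance.
        return force_strength > resist_strength

    print(moves(10.0, 4.0))          # True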
In addition, in spaces in which orientation is defined, forces can have an orientation, and a version of the Parallelogram of Forces Law can be encoded. Finally, force interacts with shape in ways characterized by words like "stretch", "compress", "bend", "twist", and "shear".

3.6.3 Systems and Functionality

An important concept is the notion of a "system", which is a set of entities, a set of their properties, and a set of relations among them. A common kind of system is one in which the entities are events and conditions and the relations are causal and enabling relations. A mechanical device can be described as such a system, in a sense, in terms of the plan it executes in its operation. The function of various parts and of conditions of those parts is then the role they play in this system, or plan.

The intransitive sense of "operate", as in

    The diesel was operating.

involves systems and functionality. If an entity x operates, then there must be a larger system s of which x is a part. The entity x itself is a system with parts. These parts undergo normative state changes, thereby causing x to undergo normative state changes, thereby causing x to produce an effect with a normative function in the larger system s. The concept of "normative" is discussed below.

3.6.4 Shape

We have been approaching the problem of characterizing shape from a number of different angles. The classical treatment of shape is via the notion of "similarity" in Euclidean geometry, and in Hilbert's formal reconstruction of Euclidean geometry (Hilbert, 1902) the key primitive concept seems to be that of "congruent angles". Therefore, we first sought to develop a theory of "orientation". The shape of an object can then be characterized in terms of changes in orientation of a tangent as one moves about on the surface of the object, as is done in vision research (e.g., Zahn and Roskies, 1972). In all of this, since "shape" can be used loosely and metaphorically, one question we are asking is whether some minimal, abstract structure can be found in which the notion of "shape" makes sense. Consider, for instance, a graph in which one scale is discrete, or even unordered. Accordingly, we have been examining a number of examples, asking when it seems right to say two structures have different shapes.

We have also examined the interactions of shape and functionality (cf. Davis, 1984). What seems to be crucial is how the shape of an obstacle constrains the motion of a substance or of an object of a particular shape (cf. Shoham, 1985). Thus, a funnel concentrates the flow of a liquid, and similarly, a wedge concentrates force. A box pushed against a ridge in the floor will topple, and a wheel is a limiting case of continuous toppling.

3.7 Hitting, Abrasion, Wear, and Related Concepts

For x to hit y is for x to move into contact with y with some force.
The basic scenario for an abrasive event is that there is an impinging bit of material m which hits an object o and by doing so removes a point-like bit of material b0 from the surface of o:

    abr-event'(e, m, o, b0) ⊃ material(m) ∧ topologically-invariant(o)

    (∀ e, m, o, b0) abr-event'(e, m, o, b0) ≡
        (∃ t, b, s, e1, e2, e3) at(e, t) ∧ consists-of(o, b, t)
        ∧ surface(s, b) ∧ particle(b0, s)
        ∧ change'(e, e1, e2) ∧ attached'(e1, b0, b) ∧ not'(e2, e1)
        ∧ cause(e3, e) ∧ hit'(e3, m, b0)

After the abrasive event, the point-like bit b0 is no longer a part of the object o:

    (∀ e, m, o, b, b0, b2, e1, e2, t2) abr-event'(e, m, o, b0)
        ∧ change'(e, e1, e2) ∧ attached'(e1, b0, b) ∧ not'(e2, e1)
        ∧ at(e2, t2) ∧ consists-of(o, b2, t2)
        ⊃ ¬part(b0, b2)

It is necessary to state this explicitly since objects and bits of material can be discontinuous.

An abrasion is a large number of abrasive events widely distributed through some nonpointlike region on the surface of an object:

    (∀ e, m, o) abrade'(e, m, o) ≡
        (∃ bs)[(∀ e1)[e1 ∈ e ⊃ (∃ b0) b0 ∈ bs ∧ abr-event'(e1, m, o, b0)]
        ∧ (∀ b, s, t)[at(e, t) ∧ consists-of(o, b, t) ∧ surface(s, b)
            ⊃ (∃ r) subregion(r, s) ∧ widely-distributed(bs, r)]]

Wear can occur by means of a large collection of abrasive events distributed over time as well as space (so that there may be no time at which enough abrasive events occur to count as an abrasion). Thus, the link between wear and abrasion is via the common notion of abrasive events, not via a definition of wear in terms of abrasion.

    (∀ e, m, o) wear'(e, m, o) ≡
        (∃ bs)(∀ e1)[e1 ∈ e ⊃ (∃ b0)(b0 ∈ bs ∧ abr-event'(e1, m, o, b0))]
        ∧ (∃ i)[interval(i) ∧ widely-distributed(e, i)]

The concept "widely distributed" concerns systems. If x is distributed in y, then y is a system and x is a set of entities which are located at components of y. For the distribution to be wide, most of the elements of a partition of y determined independently of the distribution must contain components which have elements of x at them.

The word "wear" is one of a large class of other events involving cumulative, gradual loss of material - events described by words like "chip", "corrode", "file", "erode", "rub", "sand", "grind", "weather", "rust", "tarnish", "eat away", "rot", and "decay". All of these lexical items can now be defined as variations on the definition of "wear", since we have built up the axiomatizations underlying "wear". We are now in a position to characterize the entire class. We will illustrate this by defining two different types of variants of "wear" - "chip" and "corrode".

"Chip" differs from "wear" in three ways: the bit of material removed in one abrasive event is larger (it need not be point-like), it need not happen because of a material hitting against the object, and "chip" does not require (though it does permit) a large collection of such events: one can say that some object is chipped if there is only one chip in it. Thus, we slightly alter the definition of abr-event to accommodate these changes:

    (∀ e, m, o, b0) chip'(e, m, o, b0) ≡
        (∃ t, b, s, e1, e2) at(e, t) ∧ consists-of(o, b, t) ∧ surface(s, b)
        ∧ part(b0, s) ∧ change'(e, e1, e2) ∧ attached'(e1, b0, b) ∧ not'(e2, e1)
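The role that wide distribution plays in separating wear from a single abrasion can be illustrated with a small sketch. It is ours, and the partition test below is a crude stand-in for the definition above: the cells are determined independently of the events, and most cells must contain one.

    # `wear` as a set of abrasive events widely distributed over an
    # interval (illustrative only).
    def widely_distributed(times, start, end, cells=4, threshold=0.75):
        # Partition [start, end) into equal cells and require that
        # most cells contain some event.
        width = (end - start) / cells
        hit = {int((t - start) // width) for t in times if start <= t < end}
        return len(hit) >= threshold * cells

    # Each abrasive event removes one point-like bit at some time.
    abrasive_event_times = [0.5, 2.1, 2.2, 5.8, 7.3]

    print(widely_distributed(abrasive_event_times, 0.0, 8.0))  # True: wear
    print(widely_distributed([0.1, 0.2, 0.3], 0.0, 8.0))       # False: a
    # cluster of events at a single time may be an abrasion, not wear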
"Corrode" differs from "wear" in that the bit of material is chemically transformed as well as being detached by the contact event; in fact, in some way the chemical transformation causes the detachment. This can be captured by adding a condition to the abrasive event which renders it a (single) corrode event:

    corrode-event'(e, m, o, b0) ⊃ fluid(m) ∧ contact(m, b0)

    (∀ e, m, o, b0) corrode-event'(e, m, o, b0) ≡
        (∃ t, b, s, e1, e2, e3) at(e, t) ∧ consists-of(o, b, t) ∧ surface(s, b)
        ∧ particle(b0, s) ∧ change'(e, e1, e2) ∧ attached'(e1, b0, b) ∧ not'(e2, e1)
        ∧ cause(e3, e) ∧ chemical-change'(e3, m, b0)

"Corrode" itself may be defined in a parallel fashion to "wear", substituting corrode-event for abr-event.

All of this suggests the generalization that abrasive events, chipping events, and corrode events all detach the bit in question, and that we may describe all of these as detaching events. We can then generalize the above axiom about abrasive events resulting in loss of material to the following axiom about detaching:

    (∀ e, m, o, b, b0, b2, e1, e2, t2) detach'(e, m, o, b0)
        ∧ change'(e, e1, e2) ∧ attached'(e1, b0, b) ∧ not'(e2, e1)
        ∧ at(e2, t2) ∧ consists-of(o, b2, t2)
        ⊃ ¬part(b0, b2)

4 Relevance and the Normative

Many of the concepts we are investigating have driven us inexorably to the problems of what is meant by "relevant" and by "normative". We do not pretend to have solved these problems. But for each of these concepts we do have the beginnings of an account that can play a role in analysis, if not yet in implementation.

Our view of relevance, briefly stated, is that something is relevant to some goal if it is a part of a plan to achieve that goal. (A formal treatment of a similar view is given in Davies and Russell, 1986.) We can illustrate this with an example involving the word "sample". If a bit of material x is a sample of another bit of material y, then x is a part of y, and moreover, there are relevant properties p and q such that it is believed that if p is true of x then q is true of y. That is, looking at the properties of the sample tells us something important about the properties of the whole. Frequently, p and q are the same property. In our target texts, the following sentence occurs:

    We retained an oil sample for future inspection.

The oil in the sample is a part of the total lube oil in the lube oil system, and it is believed that a property of the sample, such as "contaminated with metal particles", will be true of all of the lube oil as well, and that this will give information about possible wear on the bearings. It is therefore relevant to the goal of maintaining the machinery in good working order.

We have arrived at the following provisional account of what it means to be "normative". For an entity to exhibit a normative condition or behavior, it must first of all be a component of a larger system. This system has structure in the form of relations among its components. A pattern is a property of the system, namely, the property of a subset of these structural relations holding. A norm is a pattern which is established either by conventional stipulation or by statistical regularity. An entity is behaving in a normative fashion if it is a component of a system and instantiates a norm within that system. The word "operate" given above illustrates this. When we say that an engine is operating, we have in mind a larger system, the device the engine drives, to which the engine may bear various possible relations. A subset of these relations is stipulated to be the norm - the way it is supposed to work. We say it is operating when it is instantiating this norm.
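A minimal sketch of this provisional account is given below. It is ours, and every relation in it is invented; it only encodes the idea that an entity behaves normatively when the stipulated relations involving it actually hold in the system.

    # Normative behavior as instantiating a norm within a system.
    # The system: relations currently holding among components.
    holding = {
        ("drives", "engine", "compressor"),
        ("lubricates", "oil", "engine"),
    }

    # The norm: the stipulated pattern, i.e. how the device is
    # supposed to work.
    norm = {
        ("drives", "engine", "compressor"),
        ("lubricates", "oil", "engine"),
    }

    def normative(entity, holding, norm):
        # The entity instantiates the norm if every stipulated
        # relation involving it actually holds in the system.
        relevant = {r for r in norm if entity in r[1:]}
        return relevant <= holding

    print(normative("engine", holding, norm))   # True: it is operating
    print(normative("engine",
                    holding - {("lubricates", "oil", "engine")},
                    norm))                      # False: not operating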
5 Conclusion

The research we have been engaged in has forced us to explicate a complex set of commonsense concepts. Since we have done it in as general a fashion as possible, we may expect that it will be possible to axiomatize a large number of other areas, including areas unrelated to mechanical devices, building on this foundation. The very fact that we have been able to characterize words as diverse as "range", "immediately", "brittle", "operate" and "wear" shows the promise of this approach.

Acknowledgements

The research reported here was funded by the Defense Advanced Research Projects Agency under Office of Naval Research contract N00014-85-C-0013. It builds on work supported by NIH Grant LM03611 from the National Library of Medicine, by Grant IST-8209346 from the National Science Foundation, and by a gift from the Systems Development Foundation.

References

[1] Allen, James F., and Henry A. Kautz. 1985. "A model of naive temporal reasoning." Formal Theories of the Commonsense World, ed. by Jerry R. Hobbs and Robert C. Moore, Ablex Publishing Corp., 251-268.
[2] Croft, William. 1986. Categories and Relations in Syntax: The Clause-Level Organization of Information. Ph.D. dissertation, Department of Linguistics, Stanford University.
[3] Davies, Todd R., and Stuart J. Russell. 1986. "A logical approach to reasoning by analogy." Submitted to the AAAI-86 Fifth National Conference on Artificial Intelligence, Philadelphia, Pennsylvania.
[4] Davis, Ernest. 1984. "Shape and Function of Solid Objects: Some Examples." Computer Science Technical Report 137, New York University. October 1984.
[5] Hager, Greg. 1985. "Naive physics of materials: A recon mission." In Commonsense Summer: Final Report, Report No. CSLI-85-35, Center for the Study of Language and Information, Stanford University.
[6] Hayes, Patrick J. 1979. "Naive physics manifesto." Expert Systems in the Micro-electronic Age, ed. by Donald Michie, Edinburgh University Press, pp. 242-270.
[7] Herskovits, Annette. 1982. Space and the Prepositions in English: Regularities and Irregularities in a Complex Domain. Ph.D. dissertation, Department of Linguistics, Stanford University.
[8] Hilbert, David. 1902. The Foundations of Geometry. The Open Court Publishing Company.
[9] Hobbs, Jerry R. 1974. "A Model for Natural Language Semantics, Part I: The Model." Research Report #36, Department of Computer Science, Yale University. October 1974.
[10] Hobbs, Jerry R. 1985a. "Ontological promiscuity." Proceedings, 23rd Annual Meeting of the Association for Computational Linguistics, pp. 61-69.
[11] Hobbs, Jerry R. 1985b. "Granularity." Proceedings of the Ninth International Joint Conference on Artificial Intelligence, Los Angeles, California, August 1985, 432-435.
[12] Hobbs, Jerry R. and Robert C. Moore, eds. 1985. Formal Theories of the Commonsense World, Ablex Publishing Corp.
[13] Hobbs, Jerry R. et al. 1985. Commonsense Summer: Final Report, Report No. CSLI-85-35, Center for the Study of Language and Information, Stanford University.
[14] Katz, Jerrold J. and Jerry A. Fodor. 1963. "The structure of a semantic theory." Language, Vol. 39 (April-June 1963), 170-210.
[15] Lakoff, G. 1972. "Linguistics and natural logic." Semantics of Natural Language, ed. by Donald Davidson and Gilbert Harman, 545-665.
[16] McDermott, Drew. 1985. "Reasoning about plans." Formal Theories of the Commonsense World, ed. by Jerry R. Hobbs and Robert C. Moore, Ablex Publishing Corp., 269-318.
[17] Miller, George A. and Philip N. Johnson-Laird. 1976. Language and Perception, Belknap Press.
[18] Rieger, Charles J. 1974. "Conceptual memory: A theory and computer program for processing and meaning content of natural language utterances." Stanford AIM-233, Department of Computer Science, Stanford University.
[19] Schank, Roger. 1975. Conceptual Information Processing. Elsevier Publishing Company.
[20] Shoham, Yoav. 1985. "Naive kinematics: Two aspects of shape." In Commonsense Summer: Final Report, Report No. CSLI-85-35, Center for the Study of Language and Information, Stanford University.
[21] Stickel, M. E. 1982. "A nonclausal connection-graph resolution theorem-proving program." Proceedings of the AAAI-82 National Conference on Artificial Intelligence, Pittsburgh, Pennsylvania, 229-233.
[22] Talmy, Leonard. 1983. "How language structures space." Spatial Orientation: Theory, Research, and Application, ed. by Herbert Pick and Linda Acredolo, Plenum Press.
[23] Talmy, Leonard. 1985. "Force dynamics in language and thought." Proceedings from the Parasession on Causatives and Agentivity, 21st Regional Meeting, Chicago Linguistic Society, ed. by William H. Eilfort, Paul D. Kroeber, and Karen L. Peterson.
[24] Zahn, C. T., and R. Z. Roskies. 1972. "Fourier descriptors for plane closed curves." IEEE Transactions on Computers, Vol. C-21, No. 3, 269-281. March 1972.
A Terminological Simplification Transformation for Natural Language Question-Answering Systems

David G. Stallard
BBN Laboratories Inc.
10 Moulton St.
Cambridge, MA 02238

Abstract

A new method is presented for simplifying the logical expressions used to represent utterance meaning in a natural language system. 1 This simplification method utilizes the encoded knowledge and the limited inference-making capability of a taxonomic knowledge representation system to reduce the constituent structure of logical expressions. The specific application is to the problem of mapping expressions of the meaning representation language to a database language capable of retrieving actual responses. Particular account is taken of the model-theoretic aspects of this problem.

1. Introduction

A common and useful strategy for constructing natural language interface systems is to divide the processing of an utterance into two major stages: the first mapping the utterance to a logical expression representing its "meaning" and the second producing from this logical expression the appropriate response. The second stage is not necessarily trivial: the difficulty of its design is significantly affected by the complexity and generality of the logical expressions it has to deal with. If this issue is not faced squarely, it may affect choices made elsewhere in the system. Indeed, a need to restrict the form of the meaning representation can be at odds with particular approaches towards producing it - as for example the "compositional" approach, which does not seek to control expression complexity by giving interpretations for whole phrasal patterns, but simply combines together the meaning of individual words in a manner appropriate to the syntax of the utterance. Such a conflict is certainly not desirable: we want to have freedom of linguistic action as well as to be able to obtain correct responses to utterances.

This paper treats in detail the particular manifestation of these issues for natural-language systems which serve as interfaces to a database: the problems that arise in a module which maps the meaning representation to a second logical language for expressing actual database queries. A module performing such a mapping is a component of such question-answering systems as TEAM [4], PHLIQA1 [7] and IRUS [1]. As an example of the difficulties which may be encountered, consider the question "Was the patient's mother a diabetic?", whose logical representation must be mapped onto a particular boolean field which encodes for each patient whether or not this complex property is true. Any variation on this question which a compositional semantics might also handle, such as "Was diabetes a disease the patient's mother suffered from?", would result in a semantically equivalent but very different-looking logical expression; this different expression would also have to be mapped to this field. How to deal with these and many other possible variants, without making the mapping process excessively complex, is clearly a problem.

The solution which this paper presents is a new level of processing, intermediate between the other two: a novel simplification transformation which is performed on the result of semantic interpretation before the attempt is made to map it to the database. This simplification method relies on knowledge which is stored in a taxonomic knowledge representation system such as NIKL [5]. The principle behind the method is that an expression may be simplified by translating its subexpressions, where possible, into the language of NIKL, and classifying the result into the taxonomy to obtain a simpler equivalent for them. The result is to produce an equivalent but syntactically simpler expression in which fewer, but more specific, properties and relations appear. The benefit is that deductions from the expression may be more easily "read off"; in particular, the mapping becomes easier because the properties and relations appearing are more likely to be either those of the database or composable from them.

The body of the paper is divided into four sections. In the first, I will summarize some past treatments of the mapping between the meaning representation and the query language, and show the problems they fail to solve. The second section prepares the way by showing how to connect the taxonomic knowledge representation system to a logical language used for meaning representation. The third section presents the "recursive terminological simplification" algorithm itself. The last section describes the implementation status and suggests directions for interesting future work.

1 The work presented here was supported under DARPA contract #N00014-85-C-0016. The views and conclusions contained in this document are those of the author and should not be interpreted as necessarily representing the official policies, either expressed or implied, of the Defense Advanced Research Projects Agency or of the United States Government.

2. A Formal Treatment of the Mapping Problem

This section discusses some previous work on the problem of mapping between the logical language used for meaning representation and the logical language in which actual database queries are expressed. The difficulties which remain for these approaches will be pointed out.

A common organization for a database is in terms of tables with rows and columns. The standard formulation of these ideas is found in the relational model of Codd [3], in which the tables are characterized as relations over sets of atomic data values. The elements (rows) of a relation are called "tuples", while its individual argument places (columns) are termed its "attributes". Logical languages for the construction of queries, such as Codd's relational algebra, must make reference to the relations and attributes of the database.

The first issue to be faced in consideration of the mapping problem is what elements of the database to identify with the objects of discourse in the utterance - that is, with the non-logical constants 2 in the meaning representation. In previous work [9] I have argued that these should not be the rows of the tables, as one might first think, but rather certain sets of the atomic attribute-values themselves. I presented an algorithm which converted expressions of a predicate calculus-based meaning representation language to the query language ERL, a relational algebra [3] extended with second-order operations. The translations of non-logical constants in the meaning representation were provided by fixed and local translation rules that were simply ERL expressions for computing the total extension of the constant in the database. The expressions so derived were then combined together in an appropriate way to yield an expression for computing the response for the entire meaning representation expression.
If the algorithm encountered a non-logical constant for which no translation rule existed, the translation failed and the user was informed as to why the system could not answer his question. By way of illustration, consider the following relational database, consisting of clinical history information about patients at a given hospital and of information about doctors working there:

    PATIENTS(PATID, SEX, AGE, DISEASE, PHYS, DIAMOTHER)
    DOCTORS(DOCID, NAME, SEX, SPECIALTY)

where "PHYS" is the ID of the treating physician, and "DIAMOTHER" is a boolean field indicating whether or not the patient's mother is diabetic. Here are the rules for the one-place predicate PATIENTS, the one-place predicate SPECIALTIES, and the two-place predicate TREATING-PHYSICIAN:

    PATIENTS => (PROJECT PATIENTS OVER PATID)
    SPECIALTIES => (PROJECT DOCTORS OVER SPECIALTY)
    TREATING-PHYSICIAN => (PROJECT (JOIN PATIENTS TO DOCTORS OVER PHYS DOCID)
                                   OVER PATID DOCID)

Note that while no table exists for physician SPECIALTIES, we can nonetheless give a rule for this predicate in a way that is uniform with the rule given for the predicate PATIENTS.

2 This term, while a standard one in formal logic, may be confused with other uses of the word "constant". It simply refers to the function, predicate and ordinary constant symbols, such as "MOTHER" or "JOHN", whose denotations depend on the interpretation of the language, as opposed to fixed symbols like "FORALL", "AND", "TRUE".

One advantage of such local translation rules is their simplicity. Another advantage is that they enable us to treat database question-answering model-theoretically. The set-theoretic structure of the model is that which would be obtained by generating from the relations of the database the much larger set of "virtual" relations that are expressible as formulas of ERL. The interpretation function of the model is just the translation function itself. Note that it is a partial function because of the fact that some non-logical constants may not have translations. We speak therefore of the database constituting a "partially specified model" for the meaning representation language. Computation of a response to a user's request, instead of being characterizable only as a procedural operation, becomes interpretation in such a model.

A similar model-theoretic approach is advocated in the work on PHLIQA1 [8], in which a number of difficulties in writing local rules are identified and overcome. One class of techniques presented there allows for quite complex and general expressions to result from local rule application, to which a post-translation simplification process is applied. Other special-purpose techniques are also presented, such as the creation of "proxies" to stand in for elements of a set for which only the cardinality is known.

A more difficult problem, for which these techniques do not provide a general treatment, arises when we want to get at information corresponding to a complex property whose component properties and relations are not themselves stored. For example, suppose the query "List patients whose mother was a diabetic" is represented by the meaning representation:

    (display ↑(setof X:PATIENT
                (forall Y:PERSON
                  (-> (MOTHER X Y) (DIABETIC Y)))))

The information to compute the answer may be found in the field DIAMOTHER above. It is very hard to see how we will use local rules to get to it, however, since nothing constructable from the database corresponds to the non-logical constants MOTHER and DIABETIC. The problem is that the database chooses to highlight the complex property DIAMOTHER while avoiding the cost of storing its constituent predicates MOTHER and DIABETIC - the conceptual units corresponding to the words of the utterance.
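The local-rule paradigm, and the way it fails here, can be sketched in a few lines of Python (ours; the rule table reuses the example above, and the lookup mechanics are an illustration, not the ERL translator itself).

    # Local translation rules: each non-logical constant maps to a
    # relational-algebra expression for its total extension.
    rules = {
        "PATIENTS": "(PROJECT PATIENTS OVER PATID)",
        "SPECIALTIES": "(PROJECT DOCTORS OVER SPECIALTY)",
        "TREATING-PHYSICIAN":
            "(PROJECT (JOIN PATIENTS TO DOCTORS OVER PHYS DOCID)"
            " OVER PATID DOCID)",
    }

    def translate(constant):
        # The interpretation function is partial: translation fails
        # on constants without rules.
        if constant not in rules:
            raise KeyError("no translation rule for " + constant +
                           ": cannot answer this question")
        return rules[constant]

    print(translate("SPECIALTIES"))   # a virtual relation, though no
                                      # SPECIALTIES table exists
    try:
        translate("MOTHER")           # the DIAMOTHER difficulty:
    except KeyError as e:             # nothing in the database
        print(e.args[0])              # corresponds to MOTHER alone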
The problem is that the database chooses to highlight the complex property DIAMOTHER while avoiding the cost of storing its constituent predicates MOTHER and DIABETIC - the conceptual units corresponding to the words of the utterance. One way to get around these difficulties is of course to allow for a more general kind of, transformation: a "global rule" which would match against a whole syntactic pattern like the univerally quantified sub- expression above. The disadvantage of this, as is pointed out in [8], is that the richness of both natural language and logic allows the same meaning to be expressed in many different ways, which a complete "global rule" would have to match. Strictly syntactic variation is possible: pieces of the pattern may be spread out over the expression, from which the pattern match would have to grab them. Equivalent formulations of the query may also use completely different terms. For example, the user might have employed the equivalent phrase "female parent" in place of the word "mother", presumably causing the semantic interpretation to yield a logical form with the different predicates "PARENT" and "FEMALE". This would not match the pattern. It becomes clear that the "pattern-matching" to be performed here is not the literal kind, and that it involves unspecified and arbitrary amounts of inference. The alternative approach presented by this paper 242 takes explicit account of the fact that certain properties and relations, like "DIAMOTHER", can be regarded as built up from others. In the next section we will show how the properties and relations whose extensions the database stores can be axiomatized in terms of the ones that are more basic in the application domain. This prepares the way for the simplification transformation itself, which will rely on a limited and sound form of inference to reverse the axiomatization and transform the meaning representation, where possible, to an expression that uses only these database properties and relations. In this way, the local rule paradigm can be substantially restored. 3. Knowledge Representation and Question-Answering The purpose of this section of the paper is to present a way of connecting the meaning representation language to a taxonomic knowledge representation system in such a way that the inference-making capability of the latter is available and useful for the problems this paper addresses. Our approach may be constrasted with that of others, e.g. TEAM in which such a taxonomy is used mainly for simple inheritance and attachment duties. The knowledge representation system used in this work is NIKL [5]. Since NIKL has been described rather fully in the references, I will give only a brief summary here. NIKL is a taxonomic frame-like system with two basic data structures: concepts and roles. Concepts are just classes of entities, for which roles function somewhat as attributes. At any given concept we can restrict a role to be filled by some other concept, or place a restriction on the number of individual "fillers" of the role there. A role has one concept as its "domain" and another as its "range": the role is a relation between the sets these two concepts denote. Concepts are arranged in a hierarchy of sub-concepts and superconcepts; roles are similarly arranged. Both concepts and roles may associated with names. In logical terms, a concept may be identified as the one- place predicate with its name, and a role as the two- place predicates with its name. 
I will now give the meaning postulates for a term-forming algebra, similar to the one described in [2], in which one can write down the sort of NIKL expressions I will need. Expressions in this language are combinable to yield a complex concept or role as their value.

    (CONJ C1 ... CN) ≡ (lambda (X) (and (C1 X) ... (CN X)))

    (VALUERESTRICT R C) ≡ (lambda (X) (forall Y (-> (R X Y) (C Y))))

    (NUMBERRESTRICT R 1 NIL) ≡ (lambda (X) (exists Y (R X Y)))

    (VRDIFF R C) ≡ (lambda (X Y) (and (R X Y) (C Y)))

    (DOMAINDIFF R C) ≡ (lambda (X Y) (and (R X Y) (C X)))

The key feature of NIKL which we will make use of is its classifier, which computes subsumption and equivalence relations between concepts, and a limited form of this among roles. Subsumption is sound, and thus indicates entailment between terms:

    (SUBSUMES C1 C2) -> (forall X (-> (C2 X) (C1 X)))

If the classifier algorithm is complete, the reverse is also true, and entailment indicates subsumption. Intuitively, this means that classified concepts are pushed down as far in the hierarchy as they can go.

Also associated with the NIKL system, though not a part of the core language definition, is a symbol table which associates atomic names with the roles or concepts they denote, and concepts and roles with the names denoting them. If a concept or role does not have a name, the symbol table is able to create and install one for it when demanded.

The domain model

In order to be able to use NIKL in the analysis of expressions in the meaning representation language, we make the following stipulations for any use of the language in a given domain. First, any one-place predicate must name a concept, and any two-place predicate must name a role. Second, any constant, unless a number or a string, must name an "individual" concept - a particular kind of NIKL concept that is defined to have at most one member. An N-ary function is treated as an N+1-ary predicate. A predicate of N arguments, where N is greater than 2, is reified as a concept with N roles. This set of concepts and roles, together with the logical relationships between them, we call the "domain model".

Note that all we have done is to stipulate a one-to-one correspondence between two sets of things - the concepts and roles in the domain model and the non-logical constants of the meaning representation language. If we wish to include a new non-logical constant in the language we must enter the corresponding concept or role in the domain model. Similarly, the NIKL system's creating a new concept or role, and creation of a name in the symbol table to stand for it, furnishes us with a new non-logical constant.

Axiomatization of the database in terms of the domain model

The translation rules presented earlier effectively seek to axiomatize the properties and relations of the domain model in terms of those of the database. This is not the only way to bridge the gap. One might also try the reverse: to axiomatize the properties and relations of the database in terms of those of the domain model. Consider the DIAMOTHER field of our sample database.
We can write this in NIKL as the concept PATIENT-WITH-DIABETIC-MOTHER, using terms already present in the domain model:

    (CONJ PATIENT
          (VALUERESTRICT MOTHER DIABETIC))

If we wanted to axiomatize the relation implied by the SEX attribute of the PATIENTS table in our database, we could readily do so by defining the role PATIENT-SEX in terms of the domain model relation SEX:

    (DOMAINDIFF SEX PATIENT)

These two defined terms can actually be entered into the model, and be treated just like any others there. For example, they can now appear as predicate letters in meaning representations. Moreover, to the associated data structure we can attach a translation rule, just as we have been doing with the original domain model elements. Thus, we will attach to the concept PATIENT-WITH-DIABETIC-MOTHER the rule:

    (PROJECT (SELECT FROM PATIENTS WHERE (EQ DIAMOTHER "YES"))
             OVER PATID)
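The meaning postulates of the term-forming algebra can be given a direct computational reading. The sketch below is ours: it interprets the constructors as predicate builders over a toy model, whereas the real NIKL classifier works symbolically rather than by enumerating a model; all data are invented.

    # The term-forming algebra as predicate constructors over a
    # toy model.
    PATIENT  = {"p1", "p2"}.__contains__
    DIABETIC = {"m1"}.__contains__
    MOTHER   = {("p1", "m1"), ("p2", "m2")}   # patient -> mother
    INDIVIDUALS = {"p1", "p2", "m1", "m2"}

    def conj(*concepts):
        # (CONJ C1 ... CN): true of X when every Ci is.
        return lambda x: all(c(x) for c in concepts)

    def valuerestrict(role, concept):
        # (VALUERESTRICT R C): (lambda (X) (forall Y (-> (R X Y) (C Y))))
        return lambda x: all(concept(y) for (z, y) in role if z == x)

    # PATIENT-WITH-DIABETIC-MOTHER:
    #   (CONJ PATIENT (VALUERESTRICT MOTHER DIABETIC))
    pwdm = conj(PATIENT, valuerestrict(MOTHER, DIABETIC))

    print([x for x in sorted(INDIVIDUALS) if pwdm(x)])   # ['p1']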
First, the normalization transformations, which simply re-arrange the constituents of the expressions to a more convienent form without changing its syntactic category: (I) (ond (el XI) (P2 Xl) -- (PN XI) (Q1 X2) (Q2 X2) -- (QN X2) <rest>) ~> (and (P' Xl) (Q' X2) <rest>) where P' := (CONJ P1 P2 -- PN) and Q' := (CONJ Q1 Q2 -- QN) (2) (<quant> X:S (and (P X) <rest>) => (<quant> X:S' (and <rest>)) where S' := (CONJ S P) (3) (<quant> X:S (P X)) => (<quont> X:S') where S' := (CONJ S P) (4) (forol l X:S (-> (and (P X) <rest>) (FORMULA X)) => (forall X:S' (-> (end <rest>) (FORMULA X))) In (2) and (4) above, the conjunction or implication, respectively, are collapsed out if the sequence <rest> is empty. Now the actual simplification transformations, which seek to reduce a complex sub-expression to a one- place predication. (5) (foroll X2:S (-> (R XI X2) (P X2))) i> (P' Xl) where P' := (VALUERESTRICT (VRDIFF R S) P) (S) (exists X2:S (R X1 X2)) => (P' X1) where P' := (VALUERESTRICT R S) and R must be a functional role (7) (exists X2:S (R Xl X2)) => (P' X1) where P' := (NUMBERRESTRICT (VRDIFF R S) 1 NIL) (S) (and (P X)) => (P X) (9) (R X C) => (P X) where P := (VALUERESTRICT R C) and R is functional, C an individual concept Now, let us suppose that the exercise at the end of the last section has been carried out, and that the concept PATIENT-WITH-DIABETIC-MOTHER has been created and given the appropriate translation rule. To return to the query "List patients whose mother was a diabetic", we recall that it has the meaning representation: (DISPLAY ~(SETOF X:PATIENTS (FORALL Y : PERSON (-> (MOTHER X Y) (DIABETIC Y))))) Upon application to the SETOF expression, the algorithm first applies itself to the inner FORALL. The syntactic patterns of none of the pre-simplification transformations (2) - (4) are satisfied, so transformation (5) is applied right way to produce the NIKL concept: (VALUERESTRICT (VRDIFF MOTHER PERSON) DIABETIC) This is given to the NIKL classifier, which compares it to other concepts already i n the hierarchy. Since MOTHER has PERSON as its range already, (VRDIFF MOTHER PERSON) is just MOTHER again. The classifier thus computes that the concept specified above is a subconcept of PERSON - a PERSON such that his MOTHER was a DIABETIC. If this is not found to be equivalent to any pre-existing concept, the system assigns the concept a new name which no other concept has, say PERSON-I. The outcome of the simplification of the whole FORALL is then just the much simpler expression: (PERSON-I X) The recursive simplification of the arguments to the SETOF is now completed, and the resulting expression is: (DISPLAY 't(SETOF X:PATIENT (PERSON-I X))) Transformations can now be applied to the SETOF expression itself. The pre-simplification transformation (3) is found to apply, and a concept expressed by: (CONJ PATIENT PERSON--I) is given to the classifier, which recognizes it as equivalent to the already existing concept PATIENT- WITH-DIABETIC-MOTHER. Since any concept can serve as a sort, the final simplification is: 244 (DISPLAY t(SETOF X:PATIENT-W]TH-DIABETIC~THER)) This is the very concept for which we have a rule, so the ERL translation is: (PRINT FROM (SELECT FROM PATIENT WHERE (EQ DIAMOTHER "YES")) PATID) Suppose now that the semantic interpretation system assigned a different logical expression to represent the query "List patients whose mother was a diabetic", in which the embedded quantification is existential instead of universal. 
Suppose now that the semantic interpretation system assigned a different logical expression to represent the query "List patients whose mother was a diabetic", in which the embedded quantification is existential instead of universal. This might actually be more in line with the number of the embedded noun. The meaning representation would now be:

    (display ↑(setof X:PATIENT
                (exists Y:PERSON
                  (and (MOTHER X Y) (DIABETIC Y)))))

The recursive application of the algorithm proceeds as before. Now, however, the pre-simplification transformation (2) may be applied to yield:

    (exists Y:DIABETIC (MOTHER X Y))

since a DIABETIC is already a PERSON. Transformation (6) can be applied if MOTHER is a "functional" role - mapping each and every person to exactly one mother. This can be checked by asking the NIKL system if a number restriction has been attached at the domain of the role, PERSON, specifying that it have both a minimum and a maximum of one. If the author of the domain model has provided this reasonable and perfectly true fact about motherhood, (6) can proceed to yield:

    (PATIENT-WITH-DIABETIC-MOTHER X)

as in the preceding example.

The role-tightening phase

This phase is quite simple. After the contraction phase has been run on the whole expression, a number of variables have had their sorts changed to tighter ones. This transformation sweeps through an expression and changes the roles in the expression on that basis. Thus:

    (10) (R X Y) => (R' X Y)
         where S1 is the sort of X and S2 is the sort of Y
         and R' := (DOMAINDIFF (VRDIFF R S2) S1)

One can see that a use of the relation SEX, where the sort of the first argument is known to be DOCTOR, can readily be converted to a use of the relation DOCTOR-SEX.

Back conversion: going in the reverse direction

There will be times when the simplification transformation will "overshoot", creating and using new predicate letters which have not been seen before by classifying new data structures into the model to correspond to them. The use of such a new predicate letter can then be treated exactly as would its equivalent lambda-definition, which we can readily obtain by consulting the NIKL model. For example, a query about the sexes of leukemia victims may after simplification result in a rather strange role being created and entered into the hierarchy:

    PATIENT-SEX-1 := (DOMAINDIFF PATIENT-SEX LEUKEMIA-PATIENT)

This role is a direct descendant of PATIENT-SEX; its name is system-generated. By the meaning postulate of DOMAINDIFF given in section 3 above, it can be rewritten as the following lambda-abstract:

    (lambda (X Y) (and (PATIENT-SEX X Y) (LEUKEMIA-PATIENT X)))

For PATIENT-SEX we of course have a translation rule as discussed in section 2. A rule for LEUKEMIA-PATIENT can be imagined as involving the DISEASE field of the PATIENTS table. At this point we can simply call the translation algorithm recursively, and it will come up with a translation:

    (PROJECT (SELECT FROM PATIENTS WHERE (EQ DISEASE "LEUK"))
             OVER PATID SEX)

This supplies us with the needed rule. As a bonus, we can avoid having to recompute it later by simply attaching it to the role in the normal way. The similar computation of rules for complex concepts and roles which are already in the domain comes for free.

5. Conclusions, Implementation Status and Further Work

As of this writing, we have incorporated NIKL into the implementation of our natural language question-answering system, IRUS. NIKL is used to represent the knowledge in a Navy battle-management domain. The simplification transformation described in this paper has been implemented in this combined system, and the axiomatization of the database as described above is being added to the domain model. At that point, the methodology will be tested as a solution to the difficulties now being experienced by those trying to write the translation rules for the complex database and domain of the Fleet Command Center Battle Management Program of DARPA's Strategic Computing Program.

I have presented a limited inference method on predicate calculus expressions, whose intent is to place them in a canonical form that makes other inferences easier to make. Metaphorically, it can be regarded as "sinking" the expression lower in a certain logical space. The goal is to push it down to the "level" of the database predicates, or below. We cannot guarantee that we will always place the expression as low as it could possibly go - that problem is undecidable. But we can go a good distance, and this by itself is very useful for restoring the tractability of the mapping transformation and other sorts of deductive operations [10].

Somewhat similar simplifications are performed in the work on ARGON [6], but for a different purpose. There the database is assumed to be a full, rather than a partially specified, model and simplifications are performed only to gain an increase in efficiency. The distinguishing feature of the present work is its operation on an expression in a logical language for English meaning representation, rather than for restricted queries. A database, given the purposes for which it is designed, cannot constitute a full model for such a language. Thus, the terminological simplification is needed to reduce the logical expression, when possible, to an expression in a "sub-language" of the first for which the database is a full model.

An important outcome of this work is the perspective it gives on knowledge representation systems like NIKL. It shows how workers in other fields, while maintaining other logical systems as their primary mode of representation, can use these systems in practical ways. Certainly NIKL and NIKL-like systems could never be used as full meaning representations - they don't have enough expressive power, and were never meant to. This does not mean we have to disregard them, however. The right perspective is to view them as attached inference engines to perform limited tasks having to do with their specialty - the relationships between the various properties and relations that make up a subject domain in the real world.

Acknowledgements

First and foremost, I must thank Remko Scha, both for valuable and stimulating technical discussions as well as for patient editorial criticism. This paper has also benefited from the comments of Ralph Weischedel and Sos De Bruin. Beth Groundwater of SAIC was patient enough to use the software this work produced. I would like to thank them, and thank as well the other members of the IRUS project - Damaris Ayuso, Lance Ramshaw and Varda Shaked - for the many pleasant and productive interactions I have had with them.

References

[1] Bates, Madeleine and Bobrow, Robert J. A Transportable Natural Language Interface for Information Retrieval. In Proceedings of the 6th Annual International ACM SIGIR Conference. ACM Special Interest Group on Information Retrieval and American Society for Information Science, Washington, D.C., June, 1983.
[2] Brachman, R.J., Fikes, R.E., and Levesque, H.J. Krypton: A Functional Approach to Knowledge Representation. IEEE Computer, Special Issue on Knowledge Representation, October, 1983.
[3] Codd, E.F. A Relational Model of Data for Large Shared Data Banks. CACM 13(6), June, 1970.
[4] Grosz, Barbara, Douglas E. Appelt, Paul Martin, and Fernando Pereira. TEAM: An Experiment in the Design of Transportable Natural-Language Interfaces. Technical Report 356, SRI International, Menlo Park, CA, August, 1985.
[5] Moser, Margaret. An Overview of NIKL. Technical Report Section of BBN Report No. 5421, Bolt Beranek and Newman Inc., 1983.
[6] Patel-Schneider, P.F., H.J. Levesque, and R.J. Brachman. ARGON: Knowledge Representation meets Information Retrieval. In Proceedings of The First Conference on Artificial Intelligence Applications. IEEE Computer Society, Denver, Colorado, December, 1984.
[7] Bronnenberg, W.J.H.J., H.C. Bunt, S.P.J. Landsbergen, R.J.H. Scha, W.J. Schoenmakers and E.P.C. van Utteren. The Question Answering System PHLIQA1. In L. Bolc (editor), Natural Language Question Answering Systems. Macmillan, 1980.
[8] Scha, Remko J.H. English Words and Data Bases: How to Bridge the Gap. In 20th Annual Meeting of the Association for Computational Linguistics, Toronto. Association for Computational Linguistics, June, 1982.
[9] Stallard, David G. Data Modeling for Natural Language Access. In Proceedings of the First IEEE Conference on Applied Artificial Intelligence, Denver, Colorado. IEEE, December, 1984.
[10] Stallard, David G. Taxonomic Inference on Predicate Calculus Expressions. Submitted to AAAI, April 1, 1986.
36
Some Uses of Higher-Order Logic in Computational Linguistics Dale A. Miller and Gopalan Nadathur Computer and Information Science University of Pennsylvania Philadelphia, PA 19104 - 3897 Abstract Consideration of the question of meaning in the frame- work of linguistics often requires an allusion to sets and other higher-order notions. The traditional approach to representing and reasoning about meaning in a computa- tional setting has been to use knowledge representation sys 7 tems that are either based on first-order logic or that use mechanisms whose formal justifications are to be provided after the fact. In this paper we shall consider the use of a higher-order logic for this task. We first present a ver- sion of definite clauses (positive Horn clauses) that is based on this logic. Predicate and function variables may oc- cur in such clauses and the terms in the language are the typed h-terms. Such term structures have a richness that may be exploited in representing meanings. We also de- scribe a higher-order logic programming language, called ~Prolog, which represents programs as higher-order defi- nite clauses and interprets them using a depth-first inter- preter. A virtue of this language is that it is possible to write programs in it that integrate syntactic and seman- tic analyses into one computational paradigm. This is to be contrasted with the more common practice of using two entirely different computation paradigms, such as DCGs or ATNs for parsing and frames or semantic nets for semantic processing. We illustrate such an integration in this lan- guage by considering a simple example, and we claim that its use makes the task of providing formal justifications for the computations specified much more direct. 1. Introduction The representation of meaning, and the use of such a representation to draw inferences, is an issue of central con- cern in natural language understanding systems. A theoret- ical understanding of meaning is generally based on logic, and it has been recognized that a higher-order logic is par- ticularly well suited to this task. Montague, for example, used such a logic to provide a compositional semantics for simple English sentences. In the computational framework, knowledge representation systems are given the task of rep- resenting the semantical notions that are needed in natural This work has been supported by NSF grants MCS-82- 19196-CER, MCS-82-07294, AI Center grants MCS-83- 05221, US Army Research Office grant ARO-DAA29-84- 9-0027, and DARPA N000-14-85-K-0018. language understanding programs. While the formal justi- fications that are provided for such systems is usually log- ical, the actual formalisms used are often distantly related to logic. Our approach in this paper is to represent mean- ings directly by using logical expressions, and to describe the process of inference by specifying manipulations on such expressions. As it turns out, most programming languages are poorly suited for an approach such as ours. Prolog, for instance, permits the representation and the examina- tion of the structure of first-order terms, but it is not easy to use such terms to represent first-order formulas which contain quantification. Lisp on the other hand allows the construction of lambda expressions which could encode the binding operations of quantifiers, but does not provide log- ical primitives for studying the internal structure of such expressions. 
A language that is based on a higher-order logic seems to be the most natural vehicle for an approach such as ours, and in the first part of this paper we shall describe such a language. We shall then use this language to describe computations of a kind that is needed in a natural language understanding system.

Before we embark on this task, however, we need to consider the arguments that are often made against the computational use of a higher-order logic. Indeed, several authors in the current literature on computational linguistics and knowledge representation have presented reasons for preferring first-order logic over higher-order logic in natural language understanding systems, and amongst these the following three appear frequently.

(1) Gödel showed that second-order logic is essentially incomplete, i.e. true second-order logic statements are not recursively enumerable. Hence, theorem provers for this logic cannot be, even theoretically, complete.

(2) Higher-order objects like functions and predicates can themselves be considered to be first-order objects of some sort. Hence, a sorted first-order logic can be used to encode higher-order objects.

(3) Little research on theorem proving in higher-order logics has been done. Moreover, there is reason to believe that theorem proving in such a logic is extremely difficult.

These facts are often used to conclude that a higher-order logic should not be used to formalize systems if such formalizations are to be computationally meaningful. While there is some truth in each of these observations, we feel that they do not warrant the conclusion that is drawn from them. We discuss our reasons for this belief below.

The point regarding the essential undecidability of second-order logic has actually little import on the computational uses of higher-order logic. This is because the second-order logic, as it is construed in this observation, is not a proof system but rather a truth system of a very particular kind. Roughly put, the second-order logic in question is not so much a logic as it is a branch of mathematics which is interested in properties of the integers. There are higher-order logics that have been provided which contain the formulas of second-order logic but which do not assume the same notion of models (i.e. the integers). These logics, in fact, have general models, including the standard, integer model, as well as other non-standard models, and with respect to this semantics, the logic has a sound and complete proof system.

From a theoretical point of view, the second observation is important. Indeed, any system which could not be encoded into first-order logic would be more powerful than Turing machines and, hence, would be rather unsatisfactory computationally! The existence of such an encoding has little significance, however, with regard to the appropriateness of one language over another for a given set of computational tasks. Clearly, all general purpose programming languages can be encoded into first-order logic, but this has little significance with regard to the suitability of a given programming language for certain applications.

Although less work has been done on theorem proving in higher-order logic than in first-order logic, as claimed in the last point, the nature of proofs in higher-order logic is far from mysterious.
For example, higher-order resolution [1] and unification [8] have been developed, and based on these principles, several theorem provers for various higher-order logics (see [2] and its references) have been built and tested. The experience with such systems shows that theorem proving in such a logic is difficult. It is not clear, however, that the difficulty is inherent in the language chosen to express a theorem rather than in the theorem itself. In fact, expressing a higher-order theorem (as we will claim many statements about meaning are) in a higher-order logic makes its logical structure more explicit than an encoding into first-order logic does. Consequently, it is reasonable to expect that the higher-order representation should actually simplify the process of finding proofs. In a more specific sense, there are sublogics of a higher-order logic in which the process of constructing proofs is not much more complicated than in similar sublogics of first-order logic. An example of such a case is the higher-order version of definite clauses that we shall consider shortly.

In this paper, we present a higher-order version of definite clauses that may be used to specify computations, and we describe a logic programming language, λProlog, that is based on this specification language. We claim that λProlog has several linguistically meaningful applications. To bolster this claim we shall show how the syntactic and semantic processing used within a simple parser of natural language can be smoothly integrated into one logical and computational process. We shall first present a definite clause grammar that analyses the syntactic structure of simple English sentences to produce logical forms in much the same way as is done in the Montague framework. We shall then show how semantic analyses may be specified via operations on such logical forms. Finally, we shall illustrate interactions between these two kinds of analyses by considering an example of determining pronoun reference.

2. Higher-Order Logic

The higher-order logic we study here, called T, can be thought of as being a subsystem of either Church's Simple Theory of Types [5] or of Montague's intensional logic IL [6]. Unlike Church's or Montague's logics, T is very weak because it assumes no axioms regarding extensionality, definite descriptions, infinity, choice, or possible worlds. T encompasses only the most primitive logical notions, and generalizes first-order logic by introducing stronger notions of variables and substitutions. Our use of T is not driven by a desire to capture the meaning of linguistic objects, as was the hope of Montague. It is our hope that programs written in T will do that.

The language of T is a typed language. The typing mechanism provides for the usual notion of sorts often used in first-order logic and also for the notion of functional types. We take as primitive types (i.e. sorts) o for booleans and i for (first-order) individuals, adding others as needed. Functional types are written as α → β, where α and β are types. This type is intended to denote the type of functions whose domains are α and whose codomains are β. For example, i → i denotes the type of functions which map individuals to individuals, and (i → i) → o denotes the type of functions from that domain to the booleans. In reading such expressions we use the convention that → is right associative, i.e. we read α → β → γ as α → (β → γ).
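As a small illustration of this convention (ours, not part of the original text), the type i → i → o abbreviates i → (i → o): an object of this type is a curried two-place relation on individuals, which when applied to one individual yields a one-place predicate of type i → o, and when applied to a second individual yields a boolean-valued formula of type o.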
The terms or formulas of T are specified along with their respective types by the following simple rules: We start with denumerable sets of constants and variables at each type. A constant or variable in any of these sets is considered to be a formula of the corresponding type. Then, if A is of type α → β and B is of type α, the function application (A B) is a formula of type β. Finally, if x is a variable of type α and C is a term of type β, the function abstraction λx C is a formula of type α → β.

We assume that the following symbols, called the logical constants, are included in the set of constants of the corresponding type: true of type o, ¬ of type o → o, ∧, ∨, and ⊃ each of type o → o → o, and Π and Σ of type (A → o) → o for each type A. All these symbols except the last two correspond to the normal propositional connectives. The symbols Π and Σ are used in conjunction with the abstraction operation to represent universal and existential quantification: ∀x P is an abbreviation for Π(λx P) and ∃x P is an abbreviation for Σ(λx P). Π and Σ are examples of what are often called generalized quantifiers.

The type o has a special role in this language. A formula with a function type of the form t1 → ... → tn → o is called a predicate of n arguments. The i-th argument of such a predicate is of type ti. Predicates are to be thought of as representing sets and relations. Thus a predicate of type i → o represents a set of individuals, a predicate of type (i → o) → o represents a set of sets of individuals, and a predicate of type i → (i → o) → o represents a binary relation between individuals and sets of individuals. Formulas of type o are called propositions. Although predicates are essentially functions, we shall generally use the term function to denote a formula that does not have the type of a predicate.

Derivability in T, denoted by ⊢T, is defined in the following (simplified) fashion. The axioms of T are the propositional tautologies, the formula ∀x Bx ⊃ Bt, and the formula ∀x (Px ∧ Q) ⊃ (∀x Px ∧ Q). The rules of inference of the system are Modus Ponens, Universal Generalization, Substitution, and λ-conversion. The rules of λ-conversion that we assume here are α-conversion (change of bound variables), β-conversion (contraction), and η-conversion (replace A with λx (A x) and vice versa if A has type α → β, x has type α, and x is not free in A). λ-conversion is essentially the only rule in T that is not in first-order logic, but combined with the richer syntax of formulas in T it makes more complex inferences possible.

In general, we shall consider two terms to be equal if they are each convertible to the other; further distinctions can be made between formulas in this sense by omitting the rule for η-conversion, but we feel that such distinctions are not important in our context. We say that a formula is a λ-normal formula if it has the form

λx1 ... λxn (h t1 ... tm)    where n, m ≥ 0,

where h is a constant or variable, (h t1 ... tm) has a primitive type, and, for 1 ≤ i ≤ m, ti also has the same form. We call the list of variables x1, ..., xn the binder, h the head, and the formulas t1, ..., tm the arguments of such a formula. It is well known that every formula, A, can be converted to a λ-normal formula that is unique up to α-conversions. We call such a formula a λ-normal form of A and we use λnorm(A) to denote any of these alphabetic variants.
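As a concrete illustration (ours, not from the original text), the formula (λx λy (f x y)) a b β-converts to the λ-normal formula (f a b), whose binder is empty, whose head is f, and whose arguments are a and b. Similarly, λz ((λx (g x x)) z) converts to λz (g z z), a λ-normal formula with binder z, head g, and arguments z and z.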
Notice that a proposition in λ-normal form must have an empty binder and contain either a constant or free variable as its head. A proposition in λ-normal form which has a non-logical constant as its head is called atomic.

Our purpose in this paper is not merely to use a logic as a representational device, but also to think of it as a device for specifying computations. It turns out that T is too complex for the latter purpose. We shall therefore restrict our attention to what may be thought of as a higher-order analogue of positive Horn clauses. We define these below.

We shall henceforth assume that we have a fixed set of nonlogical constants. The positive Herbrand Universe is identified in this context to be the set of all the λ-normal formulas that can be constructed via function application and abstraction using the nonlogical constants and the logical constants true, ∧, ∨ and Σ; the omission here is of the symbols ¬, ⊃, and Π. We shall use the symbol H+ to denote this set of terms. Propositions in this set are of special interest to us. Let G and A be propositions in H+ such that A is atomic. A (higher-order) definite clause then is the universal closure of a formula of the form G ⊃ A, i.e. the formula ∀x̄ (G ⊃ A) where x̄ is an arbitrary listing of all the free variables in G and A, some of which may be function and predicate variables. These formulas are our generalization of positive Horn clauses for first-order logic. The formula on the left of the ⊃ in a higher-order definite clause may contain nested disjunctions and existential quantification. This generalization may be dispensed with in the first-order case because of the existence of appropriate normal forms. For the higher-order case, it is more natural to retain the embedded disjunctions and existential quantifications since substitutions for predicate variables have the potential for re-introducing them. Illustrations of this aspect appear in Section 4.

Deductions from higher-order definite clauses are very similar to deductions from positive Horn clauses in first-order logic. Substitution, unification, and backchaining can be combined to build a theorem prover in either case. However, unification in the higher-order setting is complicated by the presence of λ-conversion: two terms t and s are unifiable if there exists some substitution σ such that σs and σt are equal modulo λ-conversions. Since β-conversion is a very complex process, determining this kind of equality is difficult. The unification of typed λ-terms is, in general, not decidable, and when unifiers do exist, there need not exist a single most general unifier. Nevertheless, it is possible to systematically search for unifiers in this setting [8] and an interpreter for higher-order definite clauses can be built around this procedure. The resulting interpreter can be made to resemble Prolog except that it must account for the extra degree of nondeterminism which arises from higher-order unification. Although there are several important issues regarding the search for higher-order unifiers, we shall ignore them here since all the unification problems which arise in this paper can be solved by even a simple-minded implementation of the procedure described in [8].

3. λProlog

We have used higher-order definite clauses and a depth-first interpreter to describe a logic programming language called λProlog. We present below a brief exposition of the higher-order features of this language that we shall use in the examples in the later sections.
A fuller description of the language and of the logical considerations underlying it may be found in [9].

Programs in λProlog are essentially higher-order definite clauses. The following set of clauses that define certain standard list operations serve to illustrate some of the syntactic features of our language.

append nil K K.
append (cons X L) K (cons X M) :- append L K M.

member X (cons X L).
member X (cons Y L) :- member X L.

As should be apparent from these clauses, the syntax of λProlog borrows a great deal from that of Prolog. Symbols that begin with capital letters represent variables. All other symbols represent constants. Clauses are written backwards and the symbol :- is used for ⊃. There are, however, some differences. We have adopted a curried notation for terms, rather than the notation normally used in a first-order language. Since the language is a typed one, types must be associated with each term. This is done by either explicitly defining the type of a constant or a variable, or by inferring such a type by a process very similar to that used in the language ML [7]. The type expressions that are attached to symbols may contain variables which provide a form of polymorphism. As an example, cons and nil above are assumed to have the types A -> (list A) -> (list A) and (list A) respectively; they serve to define lists of different kinds, but each list being such that all its elements have a common type. (For the convenience of expression, we shall actually use Prolog's notation for lists in the remainder of this paper, i.e. we shall write (cons X L) as [X|L].) In the examples in this paper, we shall occasionally provide type associations, but in general we shall assume that the reader can infer them from context when it is important.

We need to represent λ-abstraction in our language, and we use the symbol \ for this purpose; i.e. λX A is written in λProlog as X\A. The following program, which defines the operation of mapping a function over a list, illustrates a use of function variables in our language.

mapfun F [X|L] [(F X)|K] :- mapfun F L K.
mapfun F [] [].

Given these clauses, (mapfun F L1 L2) is provable only if L2 is a list that results from applying F to each element of L1. The interpreter for λProlog would therefore evaluate the goal (mapfun (X\(g X X)) [a, b] L) by returning the value [(g a a), (g b b)] for L.

The logical considerations underlying the language permit functions to be treated as first-class, logic programming variables. In other words, the values of such variables can be computed through unification. For example, consider the query (mapfun F [a, b] [(g a a), (g a b)]). There is exactly one substitution for F, namely X\(g a X), that makes the above query provable. In searching for such higher-order substitutions, the interpreter for λProlog would need to backtrack over choices of substitutions. For example, if the interpreter attempted to prove the above goal by attempting to unify (F a) with (g a a), it would need to consider the following four possible substitutions for F:

X\(g X X)    X\(g a X)    X\(g X a)    X\(g a a)

If it chooses any of these other than the second, the interpreter would fail in unifying (F b) with (g a b), and would therefore have to backtrack over that choice. It is important to notice that the set of functions that are representable using the typed λ-terms of λProlog is not the set of all computable functions.
The set of functions that are so representable is in fact much weaker than those representable in, for example, a functional programming language like Lisp. Consider the goal (mapfun F [a, b] [c, d]). There is clearly a Lisp function which maps a to c and b to d, namely,

(lambda (x) (if (eq x 'a) 'c (if (eq x 'b) 'd 'e)))

Such a function is, however, not representable using our typed λ-terms since these do not contain any constants representing conditionals (or fixed point operators needed for recursive definitions). It is actually this restriction on our term structures that makes the determination of function values through unification a reasonable computational operation.

The provision of function variables and higher-order unification has several uses, some of which we shall examine in later sections. Before doing that we consider briefly certain kinds of function terms that have a special status in the logic programming context, namely predicate terms.

4. Predicates as Values

From a logical point of view, predicates are not much different from other functions; essentially they are functions that have a type of the form α1 → ... → αn → o. In a logic programming language, however, variables of this type may play a different and more interesting role than non-predicate variables. This is because such variables may appear inside the terms of a goal as well as the head of a goal. In a sense, they can be used intensionally and extensionally (or nominally and saturated). When they appear intensionally, predicates can be determined through unification just as functions. When they appear extensionally, they are essentially "executed."

An example of these mixed uses of predicate variables is provided by the following set of clauses; the logical connectives ∧ and ∨ are represented in λProlog by the symbols , and ;, true is represented by true and Σ is represented by the symbol sigma that has the polymorphic type (A -> o) -> o.

sublist P [X|L] [X|K] :- P X, sublist P L K.
sublist P [X|L] K :- sublist P L K.
sublist P [] [].

have_age L K :- sublist Z\(sigma X\(age Z X)) L K.
same_age L K :- sublist Z\(age Z A) L K.

age bob 23.
age sue 24.
age ned 23.

The first three clauses define the predicate sublist whose first argument is a predicate and is such that (sublist P L K) is provable if K is some sublist of L and all the members in K satisfy the property expressed by the predicate P. The fourth clause uses sublist to define the predicate have_age which is such that (have_age L K) is provable if K is a sublist of the objects in L which have an age. In the definition of have_age a predicate term that contains an explicit quantifier is used to instantiate the predicate argument of sublist; the predicate Z\(sigma X\(age Z X)), which may be written in logic as λz ∃x age(z, x), is true of an individual if that individual has an age. This predicate term needs to be executed in the course of evaluating, for example, the query (have_age [bob, sue, ned] K). The predicate same_age, whose definition is obtained by dropping the quantifier from the predicate term, defines a different property; (same_age L K) is true only when the objects in K have the same age.

Another example is provided by the following set of clauses that define the operation of mapping a predicate over a list.

mappred P [X|L] [Y|K] :- P X Y, mappred P L K.
mappred P [] [].

This set of clauses may be used, for example, to evaluate the following query:

mappred (X\Y\(age Y X)) [23, 24] L.
This query essentially asks for a list of two people, the first of which is 23 years old while the second is 24 years old. Given the clauses that appear in the previous example, this query has two different answers: [bob, sue] and [ned, sue]. Clearly the mapping operation defined here is much stronger than a similar operation considered earlier, namely that of mapping a function over a list. In evaluating a query that uses this set of clauses a new goal, i.e. (P X Y), is formed whose evaluation may require arbitrary computations to be performed. As opposed to this, in the earlier case only λ-reductions are performed. Thus, mappred is more like the mapping operations found in Lisp than mapfun is.

In the cases considered above, predicate variables that appeared as the heads of goals were fully instantiated before the goal was invoked. This kind of use of predicate variables is similar to the use of apply and lambda terms in Lisp: λ-contraction followed by the goal invocation simulates the apply operation in the Prolog context. However, the variable head of a goal may not always be fully instantiated when the goal has to be evaluated. In such cases there is a question as to what substitutions should be attempted. Consider, for example, the query (P bob 23). One value that may be returned for P is X\Y\(age X Y), and this may seem to be the most "natural" value. There are, however, many more substitutions for P which also satisfy this goal: X\Y\(X = bob, Y = 23), X\Y\(Y = 23), X\Y\(age sue 24), etc. are all terms that could be picked, since if they were substituted for P in the query they would result in a provable goal. There are, clearly, too many substitutions to pick from and perhaps backtrack over. Furthermore several of these may have little to do with the original intention of the query. A better strategy may be to pick the one substitution that has the largest "extension" in such cases; in the case considered here, such a substitution for P would be the term X\Y\true. It is possible to make such a choice without adding to the incompleteness of an interpreter.

Picking such a substitution does not necessarily trivialize the use of predicate variables. If a predicate occurs intensionally as well as extensionally in a goal, this kind of a trivial substitution may not be possible. To illustrate this let us consider the following set of clauses:

primrel father.
primrel mother.
primrel wife.
primrel husband.
rel R :- primrel R.
rel X\Y\(sigma Z\(R X Z, S Z Y)) :- primrel R, primrel S.

The first four clauses identify four primitive relations between individuals (primrel has type (i -> i -> o) -> o). These are then used to define other relations that are a result of "joining" primitive relations. Now if (mother jane mary) and (wife john jane) are provided as additional clauses, then the query (rel R, R john mary) would yield the substitution X\Y\(sigma Z\(wife X Z, mother Z Y)) for R. This query asks for a relation (in the sense of rel) between john and mary. The answer substitution provides the relation mother-in-law.

We have been able to show (Theorem 1 [9]) that any proof in T of a goal formula from a set of definite clauses which uses a predicate term containing the logical connectives ¬, ⊃, or ∀ can be converted into another proof in which only predicate terms from H+ are used. Thus, it is not possible for a term such as

λx (person(x) ∧ ∀y (child(x, y) ⊃ doctor(y)))

to be specified by a λProlog program, i.e.
be the unique substitution which makes some goal provable from some set of definite clauses. This is because a consequence of our theorem is that if this term is an answer substitution then there is also another λ-term that does not use implications or universal quantification that can be used to satisfy the given goal. If an understanding of a richer set of predicate constructions is desired, then one course is to leave definite clause logic for a stronger logic. An alternative approach, which we use in Section 6, is to represent predicates as function terms whose types do not involve o. This, of course, means that such predicate constructions could not be the head of goals. Hence, additional definite clauses would be needed to interpret the meaning of these encoded predicates.

5. A Simple Parsing Example

The enriched term structure of λProlog provides two facilities that are useful in certain contexts. The notion of λ-abstraction allows the representation of binding a variable over a certain expression, and the notion of application together with λ-contraction captures the idea of substitution. A situation where this might be useful is in representing expressions in first-order logic as terms, and in describing logical manipulations on them. Consider, for example, the task of representing the formula ∀x∃y (P(x, y) ⊃ Q(y, x)) as a term. Fragments of this formula may be encoded into first-order terms, but there is a genuine problem with representing the quantification. We need to represent the variable being quantified as a genuine variable, since, for instance, instantiating the quantifier involves substituting for the variable. At the same time we desire to distinguish occurrences of a variable within the scope of the quantifier from occurrences outside of it. The mechanism of λ-abstraction provides the tool needed to make such distinctions. To illustrate this let us consider how the formula above may be encoded as a λ-term. Let the primitive type b be the type of terms that represent first-order formulas. Further let us assume we have the constants & and => of type b -> b -> b, and all and some of type (i -> b) -> b. These latter two constants have the type of generalized quantifiers and are in fact used to represent quantifiers. The λ-term

(all X\(some Y\(p X Y => q Y X)))

may be used to represent the above formula. The type b should be thought of as a term-level encoding of the boolean type o.

A more complete illustration of the facilities alluded to above may be provided by considering the task of translating simple English sentences into logical forms. As an example, consider translating the sentence "Every man loves a woman" to the logical form ∀x (man(x) ⊃ ∃y (woman(y) ∧ loves(x, y))), which in our context will be represented by the λ-term

(all X\(man X => (some Y\(woman Y & loves X Y))))

A higher-order version of a DCG [10] for performing this task is provided below. This DCG draws on the spirit of Montague Grammars. (See [11] for a similar example.)

sentence (P1 P2)      --> np P1, vp P2, [.].
np (P1 P2)            --> determ P1, nom P2.
np P                  --> propernoun P.
nom P                 --> noun P.
nom X\(P1 X & P2 X)   --> noun P1, relcl P2.
vp X\(P2 (P1 X))      --> transverb P1, np P2.
vp P                  --> intransverb P.
relcl P               --> [that], vp P.

determ P1\P2\(all X\(P1 X => P2 X))  --> [every].
determ P1\P2\(P2 (iota P1))          --> [the].
determ P1\P2\(some X\(P1 X & P2 X))  --> [a].

noun man        --> [man].
noun woman      --> [woman].
propernoun john --> [john].
propernoun mary --> [mary].
transverb loves --> [loves].
transverb likes --> [likes].
intransverb lives --> [lives].

We use above the type token for English words; the DCG translates a list of such tokens to a term of some corresponding type. In the last few clauses certain constants are used in an overloaded manner. Thus the constant man corresponds to two distinct constants, one of type token and another of type i -> b. We have also used the symbol iota that has type (i -> b) -> i. This constant plays the role of a definite description operator; it picks out an individual given a description of a set of individuals. Thus, parsing the sentence "The woman that loves john likes mary" produces the term

(likes (iota X\(woman X & loves X john)) mary),

the intended meaning of which is the predication of the relationship of liking between an object that is picked out by the description X\(woman X & loves X john) and mary.

Using this DCG to parse a sentence illustrates the role that abstraction and application play in realizing the notion of substitution. It is interesting to compare this DCG with the one in Prolog that is presented in [10]. The first thing to note is that the two will parse a sentence in nearly identical fashions. In the first-order version, however, there is a need to explicitly encode the process of substitution, and considerable ingenuity must be exercised in devising grammar rules that take care of this process. In contrast, in λProlog the process of substitution and the process of parsing are handled by two distinct mechanisms, and consequently the resulting DCG is more perspicuous and so also easier to extend.

The DCG presented above may also be used to solve the inverse problem, namely that of obtaining a sentence given a logical form, and this illustrates the use of higher-order unification. Consider the task of obtaining a sentence from the logical form (all X\(man X => (some Y\(woman Y & loves X Y)))). This involves unifying the above form with the expression (P1 P2). One of the unifiers for this is

P1 --> P\(all X\(man X => P X))
P2 --> X\(some Y\(woman Y & loves X Y))

Once this unifier is picked, the task then breaks into that of obtaining a noun phrase from P\(all X\(man X => P X)) and a verb phrase from X\(some Y\(woman Y & loves X Y)). The use of higher-order unification thus seems to provide a top-down decomposition in the search for a solution. This view turns out to be a little simplistic however, since unification permits more structural decompositions than are warranted in this context. Thus, another unifier for the pair considered above is

P1 --> Z\(all Z)
P2 --> X\(man X => (some Y\(woman Y & loves X Y)))

which does not correspond to a meaningful decomposition in the context of the rest of the rules. It is possible to prevent such decompositions by anticipating the rest of the grammar rules. Alternatively decompositions may be eschewed altogether; a logical form may be constructed bottom-up and compared with the given one. The first alternative detracts from the clarity, or the specificational nature, of the solution. The latter involves an exhaustive search over the space of all sentences. The DCG considered here, together with higher-order unification, seems to provide a balance between clarity and efficiency.

The final point to be noted is that the terms that are produced at intermediate stages in the parsing process are logically meaningful terms, and computations on such terms may be encoded in other clauses in our language.
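As a simple illustration of this last point, the following single clause (our sketch, not a clause from the paper) uses β-conversion to instantiate a universally quantified logical form:

instan (all P) T (P T).

For example, the goal (instan (all X\(man X => (some Y\(woman Y & loves X Y)))) john F) would bind F to the λ-normal form (man john => (some Y\(woman Y & loves john Y))).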
In Section 7, we show how some of these terms can be directly interpreted as frame-like objects.

6. Knowledge Representation

We now consider the question of how a higher-order logic might be used for the task of representing knowledge. Traditionally, certain network based formalisms, such as KL-ONE [4], have been described for this purpose. Such formalisms use nodes and arcs in a network to encode knowledge, and provide algorithms that operate on this network in order to perform inferences on the knowledge so represented. The nature of the information represented in the network may be clarified with reference to a logic, and the correctness of the algorithms is often proved by showing that they perform certain kinds of logical inference on the underlying information. Our approach here is to encode the relevant notions by using λ-terms that directly correspond to their logical nature, and to use definite clauses to specify logical inferences on these notions. We demonstrate this approach below through a few examples.

A key notion in knowledge representation is that of a concept. KL-ONE provides the ability to define primitive roles and concepts and a mechanism to put these together to define more complex concepts. The intended interpretation of a role is a two place relation, and of a concept is a set of objects characterized by some defining property. An appropriate logical view of a concept, therefore, is to identify it with a one-place predicate. A particularly apt way of modeling the connection between a concept and a predicate is to use λ-terms of a certain kind to denote concepts. The following set of clauses that are used to define concepts modelled after examples in [4] serves to make this clear.

prim_role recipient.
prim_role sender.
prim_role supervisor.
prim_concept person.
prim_concept crew.
prim_concept commander.
prim_concept message.
prim_concept important_message.

role R :- prim_role R.
concept C :- prim_concept C.
concept (X\(C1 X & C2 X)) :- concept C1, concept C2.
concept (X\(all Y\(R X Y => C1 Y))) :- concept C1, role R.

The type of prim_role and role in the above example is (i -> i -> b) -> o and of prim_concept and concept is (i -> b) -> o. Any term that can be substituted for R so as to make (role R) provable from these clauses is considered a role. Similarly, any term that can be substituted for C so as to make (concept C) provable is considered a concept. The first three clauses serve to define primitive roles in this sense, and the next five clauses define primitive concepts. The remaining clauses describe a mechanism for constructing further roles and concepts. As can be readily seen, all roles are primitive roles. An example of a complex concept is provided by the term

(X\(message X & (all Y\(sender X Y => crew Y))))

which may be described by the noun phrase "messages all of whose senders are crew members."

One of the purposes for providing a representation for concepts is so that inferences that involve them can be described. One kind of inference that is of particular interest is that of determining subsumption. A concept C1 is said to subsume another concept C2 if every element of the set described by C2 is a member of the set described by C1. Given our representation of concepts, the question of whether C1 subsumes C2 reduces to the question of whether ∀x (C2(x) ⊃ C1(x)) is valid (i.e. provable).
Such an inference may be based either on certain primitive containment relations, or on an analysis of the structure of the terms used to denote concepts. The following set of clauses makes these ideas precise:

subsume person crew.
subsume (X\(all Y\(sender X Y => person Y))) message.
subsume (X\(all Y\(recipient X Y => crew Y))) message.
subsume message important_message.
subsume (X\(all Y\(sender X Y => commander Y))) important_message.

subsume C C.
subsume A B :- subsume A C, subsume C B.
subsume (Z\(A Z & B Z)) C :- subsume A C, subsume B C.
subsume A (Z\(B Z & C Z)) :- subsume A B.
subsume A (Z\(B Z & C Z)) :- subsume A C.
subsume (Z\(all (Y\(R Z Y => A Y)))) (Z\(all (Y\(R Z Y => B Y)))) :- subsume A B.

The first few clauses specify certain primitive containment relations; thus the first clause states that the set described by crew is contained in the set described by person. The later clauses specify subsumption relations based on these primitive ones and on the logical structure of the terms describing the concepts. One of the virtues of our representation now becomes clear: It is easy to see that the above set of clauses correctly specifies the relation of subsumption. If A and B are two terms that represent concepts, then rather elementary proof-theoretic arguments may be employed to show that (subsume A B) is provable from the above clauses if and only if the first-order term (all X\(B X => A X)) is logically entailed by the primitive subsumption relations. Furthermore, any sound and complete interpreter for λProlog (such as one searching breadth-first) may be used together with these clauses to provide a sound and complete subsumption algorithm.

Another kind of inference that is often of interest is that of determining whether an object a is in the set of objects denoted by a concept C. This question reduces to whether (C a) is a theorem. This inference may be encoded in definite clauses in the manner illustrated below:

fact (important_message m1).
fact (sender m1 kirk).
fact (recipient m1 scotty).

interp A :- fact A.
interp (A & B) :- interp A, interp B.
interp (C U) :- subsume (X\(all Y\(R X Y => C Y))) D, fact (R V U), interp (D V).
interp (C U) :- subsume C D, interp (D U).

In the clauses above, fact and interp are predicates of type b -> o. The first few clauses state which formulas of type b should be considered true; (fact X) may be read as an assertion that X is true. The last few clauses define interp to be a theorem-prover that uses subsume and fact to deduce additional formulas of type b. The only clause that may need to be explained here is the third one pertaining to interp. This clause may be explained as follows. Let (D V) and (subsume (X\(all Y\(R X Y => C Y))) D) be true. By virtue of the meaning of subsumption, ((X\(all Y\(R X Y => C Y))) V), i.e. (all Y\(R V Y => C Y)), is true. From this it follows that for any U if (R V U) is true then so is (C U). Given the clauses in this section, some of the inferences that are possible are the following: kirk is a person and a commander, and scotty is a crew member and a person. That is, (interp (person kirk)), for example, is provable from these definite clauses.

7. Syntax and Semantics in Parsing

In Section 5, we showed how sentences and phrases could be translated into logical forms that correspond to their meaning. Such logical forms are well defined objects in our language and in Section 6 we illustrated the possibility of defining logical inferences on such objects.
There are parsing problems which require semantical analysis as well as syntactic analysis, and our language provides the ability to combine such analyses in one computational framework. A common approach in natural language understanding systems is to use one computational paradigm for syntactic analysis (e.g., DCGs, ATNs) and another one for semantic analysis (e.g., frames, semantic nets). An integration of these two paradigms is often difficult to explain in a formal sense. Using the approach that we suggest here also results in the syntactic and semantic processing being done at two different levels: one is first-order and the other is higher-order. Bridging these two levels, however, can be very natural. For example, the query (see Section 4)

rel R, R john mary

mixes both aspects. The process of determining a suitable instantiation for R is second-order, while the process of determining whether or not (R john mary) is provable is first-order.

The problem of determining referents for pronouns provides an example where such an intermixing of levels is necessary, since possible referents for a pronoun must be checked for membership in the male or female concepts. For example, consider the following sentences: "John likes Mary. She loves him." The problem here is that of identifying "she" with Mary and "him" with John. This processing could be done in the following fashion: First, a DCG similar to the one in Section 5 could be written which returns not only the logical form corresponding to a sentence but also a list of possible referents for pronouns that occur later. In this example, the list of proper nouns [john, mary] would be returned. When pronouns are encountered, the DCG would substitute some male or female elements from this list, depending on the gender of the pronoun. The process of selecting an appropriate referent may be accomplished with the following clauses:

prim_concept male.
prim_concept female.
fact (female mary).
fact (male john).

select G X [X|L] :- interp (G X).
select G X [Y|L] :- select G X L.

A call to the goal (select female X [john, mary]) would result in picking mary as a female from the set of proper nouns. This is, of course, a very simple example. This framework, however, supports the following extension. Let sentences contain definite descriptions. Consider the following sentences: "The uncle whose children are all doctors likes Mary. She loves him." Here, "him" clearly refers to the uncle whose children are all doctors. In order to modify our above program we need to make only a few additions. First, we need to be able to take a concept, such as "uncle whose children are all doctors," and encode the (unique) individual within it. To do this, we use the definite description operator described in Section 5. Hence, after parsing the first sentence, the list

[(iota (X\(uncle X & (all Y\(child X Y => doctor Y))))), mary]

would be returned as the list of possible pronoun references. Consider the following additional definite clauses.

prim_concept man.
prim_concept uncle.
prim_concept doctor.
prim_relation child.
subsume male man.
subsume man uncle.

interp (P (iota Q)) :- subsume P Q.

The first six clauses give properties to some of the lexical items in this sentence. Only the last clause is an addition to our actual program. This clause, however, is very important since it is one of those simple and elegant ways in which the different logical levels can be related. A term of the form (iota Q) represents a first-order individual (i.e.
some object), but it does so by carrying with it a description of that object (the concept Q). This description can be invoked by the following inference: the Q is a P if all Qs are Ps. Hence, checking membership in a concept is transformed into a check for subsumption. To find a referent for "him" in our example sentences, the goal

(select male X [(iota (X\(uncle X & (all Y\(child X Y => doctor Y))))), mary])

would be used to pick the male from the list of possible pronoun references. (Notice here that X occurs both free and bound in this query.) In attempting to satisfy this goal, the goal

(interp (male (iota (X\(uncle X & (all Y\(child X Y => doctor Y)))))))

and then the goal

(subsume male (X\(uncle X & (all Y\(child X Y => doctor Y)))))

would be attempted. This last goal is clearly satisfied, providing a suitable referent for the pronoun "him."

8. Compiling into First-Order Logic

We have suggested that higher-order logic can be used to provide a formal specification and justification of certain computations involving meanings and parsing. We have been concerned with explaining a logic programming approach to integrating syntactic and semantic processing. Higher-order logic is, of course, not needed to perform such computations. In fact, once we have specified algorithms in this higher-order setting, it is occasionally the case that a first-order re-implementation is possible. For example, all the specifications in Section 6 can be transformed or "compiled" into first-order definite clauses. One way of performing such a compilation is to define the following constants to be the corresponding λ-terms:

and    C\D\X\(C X & D X)
restr  R\C\X\(all Y\(R X Y => C Y))

Using these definitions, the clauses for role, concept, and subsume may be rewritten as the following:

role R :- prim_role R.
concept C :- prim_concept C.
concept (and C1 C2) :- concept C1, concept C2.
concept (restr R C1) :- concept C1, role R.

subsume C C.
subsume A B :- subsume A C, subsume C B.
subsume (and A B) C :- subsume A C, subsume B C.
subsume A (and B C) :- subsume A B.
subsume A (and B C) :- subsume A C.
subsume (restr R A) (restr R B) :- subsume A B.

Introducing the notion of an element of a concept is less straightforward. In order to do this, we need to first differentiate between a fact that states membership in a concept and a fact that states a relationship between two elements. We do this by making the following additional definitions:

is_a     C\X\(fact (C X))
related  R\X\Y\(fact (R X Y))

If we assume that interp is only used to decide membership in concepts, then we may replace (interp (C X)) by (is_a C X). The remaining clauses in Section 6 can be translated into the following:

is_a important_message m1.
related sender m1 kirk.
related recipient m1 scotty.

is_a (and A B) X :- is_a A X, is_a B X.
is_a C U :- subsume (restr R C) D, related R V U, is_a D V.
is_a C U :- subsume C D, is_a D U.

The resulting first-order program is isomorphic to the original, higher-order program. The subsumption algorithm in [3] is essentially the one specified by the clauses that define subsume. There are two important points to make regarding this program, however. First, to correctly specify its meaning, one needs to develop the machinery of the higher-order program which we first presented. Second, this latter program represents a compilation of the first program. This compilation relies on simplifying the representation of concepts and roles to a point where their logical structure is no longer apparent.
As a result, it would be harder to extend this program with new forms of concepts, roles and inferences that involve them. The original program, however, is easy to extend.

Another way to see this comparison is to say that the higher-order program is the formal semantics of the first-order program. This way of looking at semantics is very similar to the denotational approach to specifying programming language semantics. There, the correct understanding of very simple, low level programming features might involve constructions which are higher-order and functional in nature.

9. Conclusions

Our goal in this paper was to argue that higher-order logic has a meaningful role to play in computational linguistics. Towards this end, we have described a version of definite clauses based on higher-order logic and presented several examples that illustrate their possible use in a natural language understanding system. We have built an experimental, depth-first interpreter for λProlog on which we have tested all the programs that appear in this paper (and many others). We are currently working on the design and implementation of an efficient interpreter for this programming language.

References

[1] Peter B. Andrews, "Resolution in Type Theory," Journal of Symbolic Logic 36 (1971), 414-432.
[2] Peter B. Andrews, Dale A. Miller, Eve Longini Cohen, Frank Pfenning, "Automating Higher-Order Logic," in Automated Theorem Proving: After 25 Years, AMS Contemporary Mathematics Series 29 (1984).
[3] Ronald J. Brachman, Hector J. Levesque, "The Tractability of Subsumption in Frame-based Description Languages," in Proceedings of the National Conference on Artificial Intelligence, AAAI 1984, 34-37.
[4] Ronald J. Brachman, James G. Schmolze, "An Overview of the KL-ONE Knowledge Representation System," Cognitive Science 9 (1985), 171-216.
[5] Alonzo Church, "A Formulation of the Simple Theory of Types," Journal of Symbolic Logic 5 (1940), 56-68.
[6] David R. Dowty, Robert E. Wall, Stanley Peters, Introduction to Montague Semantics, D. Reidel Publishing Co., 1981.
[7] Michael J. Gordon, Arthur J. Milner, Christopher P. Wadsworth, Edinburgh LCF, Springer-Verlag Lecture Notes in Computer Science No. 78, 1979.
[8] Gérard P. Huet, "A Unification Algorithm for Typed λ-calculus," Theoretical Computer Science 1 (1975), 27-57.
[9] Dale A. Miller, Gopalan Nadathur, "Higher-Order Logic Programming," in Proceedings of the Third International Logic Programming Conference, Imperial College, London, England, July 1986.
[10] F. C. N. Pereira, D. H. D. Warren, "Definite Clause Grammars for Language Analysis - A Survey of the Formalism and a Comparison with Augmented Transition Networks," Artificial Intelligence 13 (1980).
[11] David Scott Warren, "Using λ-Calculus to Represent Meaning in Logic Grammars," in Proceedings of the 21st Annual Meeting of the Association for Computational Linguistics, June 1983, 51-56.
A LOGICAL SEMANTICS FOR FEATURE STRUCTURES

Robert T. Kasper and William C. Rounds
Electrical Engineering and Computer Science Department
University of Michigan
Ann Arbor, Michigan 48109

Abstract

Unification-based grammar formalisms use structures containing sets of features to describe linguistic objects. Although computational algorithms for unification of feature structures have been worked out in experimental research, these algorithms become quite complicated, and a more precise description of feature structures is desirable. We have developed a model in which descriptions of feature structures can be regarded as logical formulas, and interpreted by sets of directed graphs which satisfy them. These graphs are, in fact, transition graphs for a special type of deterministic finite automaton.

This semantics for feature structures extends the ideas of Pereira and Shieber [11], by providing an interpretation for values which are specified by disjunctions and path values embedded within disjunctions. Our interpretation differs from that of Pereira and Shieber by using a logical model in place of a denotational semantics. This logical model yields a calculus of equivalences, which can be used to simplify formulas.

Unification is attractive, because of its generality, but it is often computationally inefficient. Our model allows a careful examination of the computational complexity of unification. We have shown that the consistency problem for formulas with disjunctive values is NP-complete. To deal with this complexity, we describe how disjunctive values can be specified in a way which delays expansion to disjunctive normal form.

1 Background: Unification in Grammar

Several different approaches to natural language grammar have developed the notion of feature structures to describe linguistic objects. These approaches include linguistic theories, such as Generalized Phrase Structure Grammar (GPSG) [2], Lexical Functional Grammar (LFG) [4], and Systemic Grammar [3]. They also include grammar formalisms which have been developed as computational tools, such as Functional Unification Grammar (FUG) [7], and PATR-II [14]. In these computational formalisms, unification is the primary operation for matching and combining feature structures.

Feature structures are called by several different names, including f-structures in LFG, and functional descriptions in FUG. Although they differ in details, each approach uses structures containing sets of attributes. Each attribute is composed of a label/value pair. A value may be an atomic symbol, but it may also be a nested feature structure.

The intuitive interpretation of feature structures may be clear to linguists who use them, even in the absence of a precise definition. Often, a precise definition of a useful notation becomes possible only after it has been applied to the description of a variety of phenomena. Then, greater precision may become necessary for clarification when the notation is used by many different investigators. Our model has been developed in the context of providing a precise interpretation for the feature structures which are used in FUG and PATR-II. Some elements of this logical interpretation have been partially described in Kay's work [8]. Our contribution is to give a more complete algebraic account of the logical properties of feature structures, which can be used explicitly for computational manipulation and mathematical analysis.
Proofs of the mathematical soundness and completeness of this logical treatment, along with its relation to similar logics, can be found in [12].

2 Disjunction and Non-Local Values

Karttunen [5] has shown that disjunction and negation are desirable extensions to PATR-II which are motivated by a wide range of linguistic phenomena.

die : [ case : nom ∨ acc
        agreement : [ gender : fem, number : sing ] ∨ [ number : pl ] ]

Figure 1: A Feature Structure containing Value Disjunction.

He discusses specifying attributes by disjunctive values, as shown in Figure 1. A value disjunction specifies alternative values of a single attribute. These alternative values may be either atomic or complex. Disjunction of a more general kind is an essential element of FUG. General disjunction is used to specify alternative groups of multiple attributes, as shown in Figure 2.

Karttunen describes a method by which the basic unification procedure can be extended to handle negative and disjunctive values, and explains some of the complications that result from introducing value disjunction. When two values, A and B, are to be unified, and A is a disjunction, we cannot actually unify B with both alternatives of A, because one of the alternatives may become incompatible with B through later unifications. Instead we need to remember a constraint that at least one of the alternatives of A must remain compatible with B.

An additional complication arises when one of the alternatives of a disjunction contains a value which is specified by a non-local path, a situation which occurs frequently in Functional Unification Grammar. In Figure 2 the obj attribute in the description of the adjunct attribute is given the value < actor >, which means that the obj attribute is to be unified with the value found at the end of the path labeled by < actor > in the outermost enclosing structure. This unification with a non-local value can be performed only when the alternative which contains it is the only alternative remaining in the disjunction. Otherwise, the case = objective attribute might be added to the value of < actor > prematurely, when the alternative containing adjunct is not used. Thus, the constraints on alternatives of a disjunction must also apply to any non-local values contained within those alternatives. These complications, and the resulting proliferation of constraints, provide a practical motivation for the logical treatment given in this paper. We suggest a solution to the problem of representing non-local path values in Section 5.4.

3 Logical Formulas for Feature Structures

The feature structure of Figure 1 can also be represented by a type of logical formula:
4 A Logical Semantics As Pereira and Shieber [11] have pointed out, a grammatical formalism can be regarded in a way similar to other representation languages. Often it is useful to use a representation language which is disctinct from the objects it represents. Thus, it can be useful to make a distinction between the domain of feature structures and the domain of their descriptions. As we shall see, this distinc- tion allows a variety of notational devices to be used in descriptions, and interpreted in a consis- tent way with a uniform kind of structure. 4.1 Domain of Feature Structures The PATR-II system uses directed acyclic graphs (dags) as an underlying representation for feature structures. In order to build complex feature structures, two primitive domains are re- quired: 258 cat ~ S subj = [ case = nominative ] actor =< sub.7' > voice = passive goal =< subj > cat = pp adjunct = prep = by obj =< actor >= [ case = objective ] mood = declarative ] mood interrogative ] f Figure 2: Disjunctive specification containing non-local values, using the notation of FUG. 1. atoms (A) 2. labels (L) The elements of both domains are symbols, usu- ally denoted by character strings. Attribute I~ belt (e.g., acase~) are used to mark edges in a dag, and atoms (e.g., "gen z) are used as prim- itive values at vertices which have no outgoing edges. A dag may also be regarded as a transition graph for a partially specified deterministic fi- nite automaton (DFA). This automaton recog- nises strings of labels, and has final states which are atoms, as well as final states which encode no information. An automaton is formally described by a tuple .~ = (Q,L, 5,qo, F) where L is the set of labels above, 6 is a partial function from Q × L to Q, and where certain el- ements of F may be atoms from the set A. We require that ~ be connected, acyclic, and have no transitions from any final states. DFAs have several desirable properties as a do- main for feature structures: 1. the value of any defined path can be denoted by a state of the automaton; 2. finding the value of a path is interpreted by running the automaton on the path string; 3. the automaton captures the crucial proper- ties of shared structure: (a) two paths which axe unified have the same state as a value, (b) unification is equivalent to a state- merge operation; 4. the techniques of automata theory become available for use with feature structures. A consequence of item 3 above is that the dis- ," tinction between type identity and token identity it clearly revealed by an automaton; two objects are necessarily the same token, if and only if they are represented by the same state. One construct of automata theory, the Nerode relation, is useful to describe equivalent paths. If #q is an automaton, we let P(A) be the set of all paths of ~4, namely the set {z E L* : 5(q0, z) is defined }. The Nerode relation N(A) is the equivalence relation defined on paths of P(~) by letting 4.2 Domain of Descriptions: Logical Formulas We now define the domain FML of logical for- mulas which describe feature structures. Figure 3 defines the syntax of well formed formulas. In the following sections symbols from the Greek alpha- bet axe used to stand for arbitrary formulas in FML. The formulas NIL and TOP axe intended to convey gno information z and ~inconsistent in- formation s respectively. Thus, NIL corresponds to a unification variable, and TOP corresponds to unification failure. 
4.2 Domain of Descriptions: Logical Formulas

We now define the domain FML of logical formulas which describe feature structures. Figure 3 defines the syntax of well-formed formulas. In the following sections symbols from the Greek alphabet are used to stand for arbitrary formulas in FML. The formulas NIL and TOP are intended to convey "no information" and "inconsistent information" respectively. Thus, NIL corresponds to a unification variable, and TOP corresponds to unification failure. A formula l : φ indicates that a value has attribute l, which is itself a value satisfying the condition φ.

NIL
TOP
a where a ∈ A
[< p₁ >, ..., < pₙ >] where each pᵢ ∈ L*
l : φ where l ∈ L and φ ∈ FML
φ ∧ ψ
φ ∨ ψ

Figure 3: The domain, FML, of logical formulas.

Conjunction and disjunction will have their ordinary logical meaning as operators in formulas. An interesting result is that conjunction can be used to describe unification. Unifying two structures requires finding a structure which has all features of both structures; the conjunction of two formulas describes the structures which satisfy all conditions of both formulas.

One difference between feature structures and their descriptions should be noted. In a feature structure it is required that a particular attribute have a unique value, while in descriptions it is possible to specify, using conjunction, several values for the same attribute, as in the formula

subj : (person : 3) ∧ subj : (number : sing)

A feature structure satisfying such a description will contain a unique value for the attribute, which can be found by unifying all of the values that are specified in the description.

Formulas may also contain sets of paths, denoting equivalence classes. Each element of the set represents an existing path starting from the initial state of an automaton, and all paths in the set are required to have a common endpoint. If E = [< x >, < y >], we will sometimes write E as < x > = < y >. This is the notation of PATR-II for pairs of equivalent paths. In subsequent sections we use E (sometimes with subscripts) to stand for a set of paths that belong to the same equivalence class.

4.3 Interpretation of Formulas

We can now state inductively the exact conditions under which an automaton 𝒜 satisfies a formula:

1. 𝒜 ⊨ NIL always;
2. 𝒜 ⊨ TOP never;
3. 𝒜 ⊨ a ⟺ 𝒜 is the one-state automaton a with no transitions;
4. 𝒜 ⊨ E ⟺ E is a subset of an equivalence class of N(𝒜);
5. 𝒜 ⊨ l : φ ⟺ 𝒜/l is defined and 𝒜/l ⊨ φ;
6. 𝒜 ⊨ φ ∧ ψ ⟺ 𝒜 ⊨ φ and 𝒜 ⊨ ψ;
7. 𝒜 ⊨ φ ∨ ψ ⟺ 𝒜 ⊨ φ or 𝒜 ⊨ ψ;

where 𝒜/l is defined by a subgraph of the automaton 𝒜 with start state δ(q₀, l); that is, if 𝒜 = (Q, L, δ, q₀, F), then 𝒜/l = (Q′, L, δ, δ(q₀, l), F′), where Q′ and F′ are formed from Q and F by removing any states which are unreachable from δ(q₀, l).

Any formula can be regarded as a specification for the set of automata which satisfy it. In the case of conjunctive formulas (containing no occurrences of disjunction) the set of automata satisfying the formula has a unique minimal element, with respect to subsumption.¹ For disjunctive formulas there may be several minimal elements, but always a finite number.

¹ A subsumption order can be defined for the domain of automata, just as it is defined for dags by Shieber [15]. A formal definition of subsumption for this domain appears in [12].
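These satisfaction conditions translate almost directly into Prolog. The sketch below is ours (using the delta/3 and atom_state/2 tables from the earlier sketch, and omitting the path-equivalence condition 4 for brevity); it checks a formula against the dag rooted at a given state:

```prolog
% satisfies(State, Formula): the sub-dag rooted at State satisfies Formula.
satisfies(_State, nil).                       % condition 1: NIL always
% no clause for top: condition 2, TOP is never satisfied
satisfies(State, A) :-                        % condition 3: atomic value
    atom(A), A \== nil, A \== top,
    atom_state(State, A).
satisfies(State, lab(L, Phi)) :-              % condition 5: l : Phi
    delta(State, L, Next),
    satisfies(Next, Phi).
satisfies(State, and(Phi, Psi)) :-            % condition 6: conjunction
    satisfies(State, Phi),
    satisfies(State, Psi).
satisfies(State, or(Phi, _)) :- satisfies(State, Phi).   % condition 7
satisfies(State, or(_, Psi)) :- satisfies(State, Psi).

% ?- satisfies(q0, and(lab(case, or(nom, acc)),
%                      lab(agreement, lab(number, sing)))).
% true.
```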
4.4 Calculus of Formulas

It is possible to write many formulas which have an identical interpretation. For example, the formulas given in the equation below are satisfied by the same set of automata:

case : (gen ∨ acc ∨ dat) ∧ case : acc = case : acc

In this simple example it is clear that the right side of the formula is equivalent to the left side, and that it is simpler. In more complex examples it is not always obvious when two formulas are equivalent. Thus, we are led to state the laws of equivalence shown in Figure 4. Note that equivalence (26) is added only to make descriptions of cyclic structures unsatisfiable.

Failure:
  l : TOP = TOP                                                    (1)

Conjunction (unification):
  φ ∧ TOP = TOP                                                    (2)
  φ ∧ NIL = φ                                                      (3)
  a ∧ b = TOP, for all a, b ∈ A with a ≠ b                         (4)
  a ∧ l : φ = TOP                                                  (5)
  l : φ ∧ l : ψ = l : (φ ∧ ψ)                                      (6)

Disjunction:
  φ ∨ NIL = NIL                                                    (7)
  φ ∨ TOP = φ                                                      (8)
  l : φ ∨ l : ψ = l : (φ ∨ ψ)                                      (9)

Commutative:
  φ ∧ ψ = ψ ∧ φ                                                    (10)
  φ ∨ ψ = ψ ∨ φ                                                    (11)

Associative:
  (φ ∧ ψ) ∧ χ = φ ∧ (ψ ∧ χ)                                        (12)
  (φ ∨ ψ) ∨ χ = φ ∨ (ψ ∨ χ)                                        (13)

Idempotent:
  φ ∧ φ = φ                                                        (14)
  φ ∨ φ = φ                                                        (15)

Distributive:
  (φ ∨ ψ) ∧ χ = (φ ∧ χ) ∨ (ψ ∧ χ)                                  (16)
  (φ ∧ ψ) ∨ χ = (φ ∨ χ) ∧ (ψ ∨ χ)                                  (17)

Absorption:
  (φ ∧ ψ) ∨ φ = φ                                                  (18)
  (φ ∨ ψ) ∧ φ = φ                                                  (19)

Path Equivalence:
  E₁ ∧ E₂ = E₂ whenever E₁ ⊆ E₂                                    (20)
  E₁ ∧ E₂ = E₁ ∧ (E₂ ∪ {xy | x ∈ E₁}) for any y such that
            ∃z : z ∈ E₁ and zy ∈ E₂                                (21)
  E ∧ x : φ = E ∧ (⋀_{y ∈ E} y : φ) where x ∈ E                    (22)
  E = E ∧ {x} if x is a prefix of a string in E                    (23)
  l : E = {lx | x ∈ E}                                             (24)
  {ε} = NIL                                                        (25)
  E = TOP for any E such that there are strings x, xy ∈ E
      and y ≠ ε                                                    (26)

Figure 4: Laws of Equivalence for Formulas.

5 Complexity of Disjunctive Descriptions

To date, the primary benefit of using logical formulas to describe feature structures has been the clarification of several problems that arise with disjunctive descriptions.

5.1 NP-completeness of the consistency problem for formulas

One consequence of describing feature structures by logical formulas is that it is now relatively easy to analyze the computational complexity of various problems involving feature structures. It turns out that the satisfiability problem for CNF formulas of propositional logic can be reduced to the consistency (or satisfiability) problem for formulas in FML. Thus, the consistency problem for formulas in FML is NP-complete. It follows that any unification algorithm for FML formulas will have non-polynomial worst-case complexity (provided P ≠ NP), since a correct unification algorithm must check for consistency.

Note that disjunction is the source of this complexity. If disjunction is eliminated from the domain of formulas, then the consistency problem is in P. Thus systems, such as the original PATR-II, which do not use disjunction in their descriptions of feature structures, do not have to contend with this source of NP-completeness.

5.2 Disjunctive Normal Form

A formula is in disjunctive normal form (DNF) if and only if it has the form φ₁ ∨ ... ∨ φₙ, where each φᵢ is either

1. a ∈ A
2. ψ₁ ∧ ... ∧ ψₘ, where each ψᵢ is either
   (a) l₁ : ... : lₖ : a, where a ∈ A, and no path occurs more than once
   (b) [< p₁ >, ..., < pⱼ >], where each pᵢ ∈ L*, each set denotes an equivalence class of paths, and all such sets are disjoint.

The formal equivalences given in Figure 4 allow us to transform any satisfiable formula into its disjunctive normal form, or to TOP if it is not satisfiable. The algorithm for finding a normal form requires exponential time, where the exponent depends on the number of disjunctions in the formula (in the worst case).
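To make concrete how the equivalences drive such normalization, here is a minimal sketch of ours (not the authors' implementation described in Section 6) of a few Figure 4 laws written as one-step rewrite rules on the Prolog term encoding used earlier:

```prolog
% An atomic value of the logic is any atom other than the constants.
atomic_value(A) :- atom(A), A \== nil, A \== top.

% One-step rewrites corresponding to some of the laws of Figure 4.
rewrite(and(top, _), top).                                    % (2)
rewrite(and(_, top), top).                                    % (2) + (10)
rewrite(and(nil, Phi), Phi).                                  % (3)
rewrite(and(Phi, nil), Phi).
rewrite(and(A, B), top) :-                                    % (4): clash
    atomic_value(A), atomic_value(B), A \== B.
rewrite(and(lab(L, Phi), lab(L, Psi)), lab(L, and(Phi, Psi))). % (6)
rewrite(or(Phi, top), Phi).                                   % (8)
rewrite(or(top, Phi), Phi).
rewrite(or(lab(L, Phi), lab(L, Psi)), lab(L, or(Phi, Psi))).  % (9)

% ?- rewrite(and(lab(case, gen), lab(case, acc)), F),
%    F = lab(case, G), rewrite(G, Clash).
% F = lab(case, and(gen, acc)), Clash = top.
```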
5.3 Avoiding expansion to DNF

Most of the systems which are currently used to implement unification-based grammars depend on an expansion to disjunctive normal form in order to compute with disjunctive descriptions.² Such systems are exemplified by Definite Clause Grammar [10], which eliminates disjunctive terms by multiplying rules which contain them into alternative clauses. Kay's parsing procedure for Functional Unification Grammar [8] also requires expanding functional descriptions to DNF before they are used by the parser. This expansion may not create much of a problem for grammars containing a small number of disjunctions, but if the grammar contains 100 disjunctions, the expansion is clearly not feasible, due to the exponential size of the DNF.

Ait-Kaci [1] has pointed out that the expansion to DNF is not always necessary, in work with type structures which are very similar to the feature structures that we have described here. Although the NP-completeness result cited above indicates that any unification algorithm for disjunctive formulas will have exponential complexity in the worst case, it is possible to develop algorithms which have an average complexity that is less prohibitive. Since the exponent of the complexity function depends on the number of disjunctions in a formula, one obvious way to improve the unification algorithm is to reduce the number of disjunctions in the formula before expansion to DNF. Fortunately the unification of two descriptions frequently results in a reduction of the number of alternatives that remain consistent. Although the fully expanded formula may be required as a final result, it is expedient to delay the expansion whenever possible, until after any desired unifications are performed.

The algebraic laws given in Figure 4 provide a sound basis for simplifying formulas containing disjunctive values without expanding to DNF. Our calculus differs from the calculus of Ait-Kaci by providing a uniform set of equivalences for formulas, including those that contain disjunction. These equivalences make it possible to eliminate inconsistent terms before expanding to DNF. Each term thus eliminated may reduce, by as much as half, the size of the expanded formula.

² One exception is Karttunen's implementation, which was described in Section 2, but it handles only value disjunctions, and does not handle non-local path values embedded within disjunctions.

5.4 Representing Non-local Paths

The logic contains no direct representation for non-local paths of the type described in Section 2. The reason is that these cannot be interpreted without reference to the global context of the formula in which they occur. Recall that in Functional Unification Grammar a non-local path denotes the value found by extracting each of the attributes labeled by the path in successively embedded feature structures, beginning with the entire structure currently under consideration. Stated formally, the desired interpretation of l : < p > is

𝒜 ⊨ l : < p > in the context of 𝒞 ⟺ there exist 𝓑 ⊨ 𝒞 and w ∈ L* such that 𝓑/w = 𝒜 and δ(q₀, wl) = δ(q₀, p) in 𝓑.

This interpretation does not allow a direct comparison of the non-local path value with other values in the formula. It remains an unknown quantity unless the environment is known.

Instead of representing non-local paths directly in the logic, we propose that they can be used within a formula as a shorthand, but that all paths in the formula must be expanded before any other processing of the formula. This path expansion is carried out according to equivalences (9) and (6). After path expansion all strings of labels in a formula denote transitions from a common origin, so the expressions containing non-local paths can be converted to the equivalence class notation, using the schema

l₁ : ... : lₙ : < p > = [< l₁ ... lₙ >, < p >].
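A sketch of this path expansion on the Prolog term encoding (our own illustration; path/1 is an assumed constructor for a non-local path value, eq/1 for an equivalence class, and at/2 for an atomic value at a path):

```prolog
% expand(+Formula, +Prefix, -Expanded): push the accumulated label prefix
% down through a formula, per equivalences (6) and (9), turning each
% non-local path value into a path-equivalence term.
expand(lab(L, Phi), Prefix, Out) :-
    append(Prefix, [L], Prefix1),
    expand(Phi, Prefix1, Out).
expand(and(P, Q), Prefix, and(P1, Q1)) :-
    expand(P, Prefix, P1),
    expand(Q, Prefix, Q1).
expand(or(P, Q), Prefix, or(P1, Q1)) :-   % disjuncts share the prefix (9),
    expand(P, Prefix, P1),                % so disjunctions are not multiplied
    expand(Q, Prefix, Q1).
expand(path(P), Prefix, eq([Prefix, P])). % the schema above
expand(A, Prefix, at(Prefix, A)) :-       % atomic value at a path
    atom(A).

% ?- expand(lab(adjunct, lab(obj, path([actor]))), [], E).
% E = eq([[adjunct, obj], [actor]]).
```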
Consider the passive voice alternative of the description of Figure 2, shown here in Figure 5. This description is also represented by the first formula of Figure 6. The formulas to the right in Figure 6 are formed by

1. applying path expansion,
2. converting the attributes containing non-local path values to formulas representing equivalence classes of paths.

By following this procedure, the entire functional description of Figure 2 can be represented by the logical formula given in Figure 7.

voice = passive
goal = < subj >
adjunct = [ cat = pp
            prep = by
            obj = < actor > = [ case = objective ] ]

Figure 5: Functional description containing non-local values.

voice : passive
∧ goal : < subj >
∧ adjunct : (cat : pp
             ∧ prep : by
             ∧ obj : < actor >
             ∧ obj : case : objective)

  == path expansion ==>

voice : passive
∧ goal : < subj >
∧ adjunct : cat : pp
∧ adjunct : prep : by
∧ adjunct : obj : < actor >
∧ adjunct : obj : case : objective

  == path equivalence ==>

voice : passive
∧ [< goal >, < subj >]
∧ adjunct : cat : pp
∧ adjunct : prep : by
∧ [< adjunct obj >, < actor >]
∧ adjunct : obj : case : objective

Figure 6: Conversion of non-local values to equivalence classes of paths.

cat : s
∧ subj : case : nominative
∧ ((voice : active ∧ [< actor >, < subj >])
   ∨ (voice : passive ∧ [< goal >, < subj >]
      ∧ adjunct : cat : pp
      ∧ adjunct : prep : by
      ∧ [< adjunct obj >, < actor >]
      ∧ adjunct : obj : case : objective))
∧ (mood : declarative ∨ mood : interrogative)

Figure 7: Logical formula representing the description of Figure 2.

It is now possible to unify the description of Figure 7 (call this X in the following discussion) with another description, making use of the equivalence classes to simplify the result. Consider unifying X with the description Y = actor : case : nominative. The commutative law (10) makes it possible to unify Y with any of the conjuncts of X. If we unify Y with the disjunction which contains the voice attributes, we can use the distributive law (16) to unify Y with both disjuncts. When Y is unified with the term containing [< adjunct obj >, < actor >], equivalence (22) specifies that we can add the term adjunct : obj : case : nominative. This term is incompatible with the term adjunct : obj : case : objective, and by applying equivalences (6), (4), (1), and (2) we can transform the entire disjunct to TOP. Equivalence (8) specifies that this disjunct can then be eliminated. Thus, we are able to use the path equivalences during unification to reduce the number of disjunctions in a formula without expanding to DNF.

Note that path expansion does not require an expansion to full DNF, since disjunctions are not multiplied. While the DNF expansion of a formula may be exponentially larger than the original, the path expansion is at most quadratically larger. The size of the formula with paths expanded is at most n × p, where n is the length of the original formula, and p is the length of the longest path. Since p is generally much less than n, the size of the path expansion is usually not a very large quadratic.
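The key step in that reduction, equivalence (22), can be sketched as an executable rule over our term encoding (eq/1 and at/2 as before; this is our sketch, not the authors' implementation):

```prolog
% propagate(+EqAndValue, -Expanded): given a conjunction of a path
% equivalence class and a value stated for one member of the class,
% state that value for every member of the class (law (22)).
propagate(and(eq(Paths), at(X, Value)), Out) :-
    member(X, Paths),
    findall(at(Y, Value), member(Y, Paths), Stated),
    conjoin([eq(Paths)|Stated], Out).

% conjoin(+Terms, -Conjunction): right-nested conjunction of a list.
conjoin([T], T).
conjoin([T1, T2|Ts], and(T1, Rest)) :-
    conjoin([T2|Ts], Rest).

% ?- propagate(and(eq([[adjunct, obj], [actor]]),
%                  at([actor], nominative)), F).
% F = and(eq([[adjunct, obj], [actor]]),
%         and(at([adjunct, obj], nominative), at([actor], nominative))).
```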
5.5 Value Disjunction and General Disjunction

The path expansion procedure illustrated in Figure 6 can also be used to transform formulas containing value disjunction into formulas containing general disjunction. For the reasons given above, value disjunctions which contain non-local path expressions must be converted into general disjunctions for further simplification.

While it is possible to convert value disjunctions into general disjunctions, it is not always possible to convert general disjunctions into value disjunctions. For example, the first disjunction in the formula of Figure 7 cannot be converted into a value disjunction. The left side of equivalence (9) requires both disjuncts to begin with a common label prefix. The terms of these two disjuncts contain several different prefixes (voice, actor, subj, goal, and adjunct), so they cannot be combined into a common value.

Before the equivalences of Section 4 were formulated, the first author attempted to implement a facility to represent disjunctive feature structures with non-local paths using only value disjunction. It seemed that the unification algorithm would be simpler if it had to deal with disjunctions only in the context of attribute values, rather than in more general contexts. While it was possible to write down grammatical definitions using only value disjunction, it was very difficult to achieve a correct unification algorithm, because each non-local path was much like an unknown variable. The logical calculus presented here clearly demonstrates that a representation of general disjunction provides a more direct method to determine the values for non-local paths.
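Reading equivalence (9) right to left gives the value-to-general conversion as a one-line rewrite on our term encoding:

```prolog
% A value disjunction under label L becomes a general disjunction of
% two labeled formulas: l : (Phi v Psi) ==> l : Phi v l : Psi.
value_to_general(lab(L, or(Phi, Psi)), or(lab(L, Phi), lab(L, Psi))).
```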
6 Implementation

The calculus described here is currently being implemented as a program which selectively applies the equivalences of Figure 4 to simplify formulas. A strategy (or algorithm) for simplifying formulas corresponds to choosing a particular order in which to apply the equivalences whenever more than one equivalence matches the form of the formula. The program will make it possible to test and evaluate different strategies, with the correctness of any such strategy following directly from the correctness of the calculus. While this program is primarily of theoretical interest, it might yield useful improvements to current methods for processing feature structures.

The original motivation for developing this treatment of feature structures came from work on an experimental parser based on Nigel [9], a large systemic grammar of English. The parser is being developed at the USC/Information Sciences Institute by extending the PATR-II system of SRI International. The systemic grammar has been translated into the notation of Functional Unification Grammar, as described in [6]. Because this grammar contains a large number (several hundred) of disjunctions, it has been necessary to extend the unification procedure so that it handles disjunctive values containing non-local paths without expansion to DNF. We now think that this implementation of a relatively large grammar can be made more tractable by applying some of the transformations to feature descriptions which have been suggested by the logical calculus.

7 Conclusion

We have given a precise logical interpretation for feature structures and their descriptions which are used in unification-based grammar formalisms. This logic can be used to guide and improve implementations of these grammars, and the processors which use them. It has allowed a closer examination of several sources of complexity that are present in these grammars, particularly when they make use of disjunctive descriptions. We have found a set of logical equivalences helpful in suggesting ways of coping with this complexity.

It should be possible to augment this logic to include characterizations of negation and implication, which we are now developing. It may also be worthwhile to integrate the logic of feature structures with other grammatical formalisms based on logic, such as DCG [10] and LFP [13].

References

[1] Ait-Kaci, H. A New Model of Computation Based on a Calculus of Type Subsumption. PhD thesis, University of Pennsylvania, 1984.

[2] Gazdar, G., E. Klein, G. K. Pullum, and I. A. Sag. Generalized Phrase Structure Grammar. Blackwell Publishing, Oxford, England, and Harvard University Press, Cambridge, Massachusetts, 1985.

[3] Kress, G. R., editor. Halliday: System and Function in Language. Oxford University Press, London, England, 1976.

[4] Kaplan, R. and J. Bresnan. Lexical Functional Grammar: A Formal System for Grammatical Representation. In J. Bresnan, editor, The Mental Representation of Grammatical Relations. MIT Press, Cambridge, Massachusetts, 1983.

[5] Karttunen, L. Features and Values. In Proceedings of the Tenth International Conference on Computational Linguistics, Stanford University, Stanford, California, July 2-7, 1984.

[6] Kasper, R. Systemic Grammar and Functional Unification Grammar. In J. Benson and W. Greaves, editors, Proceedings of the 12th International Systemics Workshop, Norwood, New Jersey: Ablex (forthcoming).

[7] Kay, M. Functional Grammar. In Proceedings of the Fifth Annual Meeting of the Berkeley Linguistics Society, Berkeley Linguistics Society, Berkeley, California, February 17-19, 1979.

[8] Kay, M. Parsing in Functional Unification Grammar. In D. Dowty, L. Karttunen, and A. Zwicky, editors, Natural Language Parsing. Cambridge University Press, Cambridge, England, 1985.

[9] Mann, W. C. and C. Matthiessen. Nigel: A Systemic Grammar for Text Generation. USC/Information Sciences Institute, RR-83-105. Also appears in R. Benson and J. Greaves, editors, Systemic Perspectives on Discourse: Selected Papers from the Ninth International Systemics Workshop, Ablex, London, England, 1985.

[10] Pereira, F. C. N. and D. H. D. Warren. Definite clause grammars for language analysis - a survey of the formalism and a comparison with augmented transition networks. Artificial Intelligence, 13:231-278, 1980.

[11] Pereira, F. C. N. and S. M. Shieber. The semantics of grammar formalisms seen as computer languages. In Proceedings of the Tenth International Conference on Computational Linguistics, Stanford University, Stanford, California, July 2-7, 1984.

[12] Rounds, W. C. and R. Kasper. A Complete Logical Calculus for Record Structures Representing Linguistic Information. Submitted to the Symposium on Logic in Computer Science, to be held June 16-18, 1986.

[13] Rounds, W. C. LFP: A Logic for Linguistic Descriptions and an Analysis of its Complexity. Submitted to Computational Linguistics.

[14] Shieber, S. M. The design of a computer language for linguistic information. In Proceedings of the Tenth International Conference on Computational Linguistics, Stanford University, Stanford, California, July 2-7, 1984.

[15] Shieber, S. M. An Introduction to Unification-based Approaches to Grammar. Chicago: University of Chicago Press, CSLI Lecture Notes Series (forthcoming).
FORUM ON MACHINE TRANSLATION

What Should Machine Translation Be?

John S. White
Siemens Information Systems
Linguistics Research Center
PO Box 7247 University Station
Austin, TX 78712

MODERATOR STATEMENT

After a considerable hiatus of interest and funding, machine translation has come in recent years to occupy a significant place in the discipline of natural language processing. It has also become one of the most visible representations of natural language processing to the outside world. Machine translation systems are relatively unique with respect to the extent of the coverage they attempt, and, correspondingly, the size of the grammatical and lexical corpora involved. Adding to this the complexity introduced by multiple language directions into the same system design (and the enormous procedural problems imposed by simultaneous development in several sites) gives some clue as to the optimism which presently exists for machine translation. It is obviously believed in many quarters that computer science and linguistic science have become sufficient for production-environment machine translation. Private sector companies continue to introduce new MT systems to the marketplace worldwide, and many more are venturing into development and implementation. The industrial interest, meanwhile, has been instrumental in opening up possibilities for doing basic research in MT, in part because of direct interaction between industry and research, and in part because of the overall increased awareness. It is indeed worth speculating whether renewed interest shown by governmental scientific agencies is related to the level of commercial acceptance.

But some feel that this visibility causes more harm than good. The concern has been expressed that an operational failure in machine translation will be seen as a failure in natural language processing generally, that a particular implementation rejected by users could cause a snowball ultimately resulting in the demise not just of MT, as in the ALPAC aftermath, but also of all of computational linguistics. Some may go so far as to suggest that such a day of reckoning will be inevitable as long as production-level machine translation efforts continue.

If it is indeed the case that production machine translation is not feasible, then machine translation is at best a heuristic environment for experimentation in linguistic theory. And machine translation does serve such an end admirably well: the modularity of program and linguistic description of which a well-designed translation system is capable allows work on hypotheses within one linguistic theory, or evaluation of different linguistic theories, without fundamental changes to the computing environment.

Two positions are identified here, whose distance from each other serves perhaps to encompass the whole range of thought on the ultimate potential of machine translation, as well as on the best possible design of a translating device. The one position holds that MT is a viable production tool whose benefit is more than worth the immense effort involved in linguistic description, textual coverage, and coordination of multi-national development. The other position holds that MT is a useful laboratory for linguistic study in a small, easily maintainable computing environment. Despite the polarity, there is a common ground, which we employ as the datum point from which to explore the issues in machine translation today.
We have progressed from the debate about the possibility of machine translation to the debate about what machine translation should be. This in itself is indicative of our awareness of the progress of computational linguistics as a whole.
RECOVERING IMPLICIT INFORMATION

Martha S. Palmer, Deborah A. Dahl, Rebecca J. Schiffman, Lynette Hirschman, Marcia Linebarger, and John Dowding
Research and Development Division
SDC -- A Burroughs Company
P.O. Box 517
Paoli, PA 19301 USA

ABSTRACT

This paper describes the SDC PUNDIT (Prolog UNDerstands Integrated Text) system for processing natural language messages.¹ PUNDIT, written in Prolog, is a highly modular system consisting of distinct syntactic, semantic and pragmatics components. Each component draws on one or more sets of data, including a lexicon, a broad-coverage grammar of English, semantic verb decompositions, rules mapping between syntactic and semantic constituents, and a domain model.

This paper discusses the communication between the syntactic, semantic and pragmatic modules that is necessary for making implicit linguistic information explicit. The key is letting syntax and semantics recognize missing linguistic entities as implicit entities, so that they can be labelled as such, and reference resolution can be directed to find specific referents for the entities. In this way the task of making implicit linguistic information explicit becomes a subset of the tasks performed by reference resolution. The success of this approach is dependent on marking missing syntactic constituents as elided and missing semantic roles as ESSENTIAL so that reference resolution can know when to look for referents.

1. Introduction

This paper describes the SDC PUNDIT² system for processing natural language messages. PUNDIT, written in Prolog, is a highly modular system consisting of distinct syntactic, semantic and pragmatics components. Each component draws on one or more sets of data, including a lexicon, a broad-coverage grammar of English, semantic verb decompositions, rules mapping between syntactic and semantic constituents, and a domain model. PUNDIT has been developed cooperatively with the NYU PROTEUS system (Prototype Text Understanding System). These systems are funded by DARPA as part of the work in natural language understanding for the Strategic Computing Battle Management Program. The PROTEUS/PUNDIT system will map Navy CASREP's (equipment casualty reports) into a database, which is accessed by an expert system to determine overall fleet readiness. PUNDIT has also been applied to the domain of computer maintenance reports, which is discussed here.

¹ This work is supported in part by DARPA under contract N00014-85-C-0012, administered by the Office of Naval Research. APPROVED FOR PUBLIC RELEASE, DISTRIBUTION UNLIMITED.
² Prolog UNDerstands Integrated Text.

The paper focuses on the interaction between the syntactic, semantic and pragmatic modules that is required for the task of making implicit information explicit. We have isolated two types of implicit entities: syntactic entities which are missing syntactic constituents, and semantic entities which are unfilled semantic roles. Some missing entities are optional, and can be ignored. Syntax and semantics have to recognize the OBLIGATORY missing entities and then mark them so that reference resolution knows to find specific referents for those entities, thus making the implicit information explicit. Reference resolution uses two different methods for filling the different types of entities which are also used for general noun phrase reference problems.
Implicit syntactic entities, ELIDED CONSTITUENTS, are treated like pronouns, and implicit semantic entities, ESSENTIAL ROLES, are treated like definite noun phrases. The pragmatic module as currently implemented consists mainly of a reference resolution component, which is sufficient for the pragmatic issues described in this paper. We are in the process of adding a time module to handle time issues that have arisen during the analysis of the Navy CASREPS.

2. The Syntactic Component

The syntactic component has three parts: the grammar, a parsing mechanism to execute the grammar, and a lexicon. The grammar consists of context-free BNF definitions (currently numbering approximately 80) and associated restrictions (approximately 35). The restrictions enforce context-sensitive well-formedness constraints and, in some cases, apply optimization strategies to prevent unnecessary structure-building. Each of these three parts is described further below.

2.1. Grammar Coverage

The grammar covers declarative sentences, questions, and sentence fragments. The rules for fragments enable the grammar to parse the "telegraphic" style characteristic of message traffic, such as disk drive down, and has select lock. The present grammar parses sentence adjuncts, conjunction, relative clauses, complex complement structures, and a wide variety of nominal structures, including compound nouns, nominalized verbs and embedded clauses.

The syntax produces a detailed surface structure parse of each sentence (where "sentence" is understood to mean the string of words occurring between two periods, whether a full sentence or a fragment). This surface structure is converted into an "intermediate representation" which regularizes the syntactic parse. That is, it eliminates surface structure detail not required for the semantic tasks of enforcing selectional restrictions and developing the final representation of the information content of the sentence. An important part of regularization involves mapping fragment structures onto canonical verb-subject-object patterns, with missing elements flagged. For example, the tvo fragment consists of a tensed verb + object, as in Replaced spindle motor. Regularization maps the tvo syntactic structure into a verb + subject + object structure:

verb(replace),subject(X),object(Y)

As shown here, verb becomes instantiated with the surface verb, e.g., replace, while the arguments of the subject and object terms are variables. The semantic information derived from the noun phrase object spindle motor becomes associated with Y. The absence of a surface subject constituent results in a lack of semantic information pertaining to X. This lack causes the semantic and pragmatic components to provide a semantic filler for the missing subject using general pragmatic principles and specific domain knowledge.

2.2. Parsing

The grammar uses the Restriction Grammar parsing framework [Hirschman1982, Hirschman1985], which is a logic grammar with facilities for writing and maintaining large grammars. Restriction Grammar is a descendant of Sager's string grammar [Sager1981]. It uses a top-down left-to-right parsing strategy, augmented by dynamic rule pruning for efficient parsing [Dowding1986]. In addition, it uses a metagrammatical approach to generate definitions for a full range of co-ordinate conjunction structures [Hirschman1986].
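The regularization step described in Section 2.1 can be pictured with a minimal sketch of our own (PUNDIT's actual clauses are not shown in the paper; regularize/2 is an assumed name):

```prolog
% regularize(+SurfaceStructure, -IntermediateRepresentation)
% A tvo fragment (tensed verb + object) becomes a canonical
% verb-subject-object list; the subject variable is left unbound
% and flagged as elided for the pragmatic components.
regularize(tvo(Verb, Object),
           [verb(Verb), subject(X), object(Object), elided(subject(X))]).

% ?- regularize(tvo(replace, spindle_motor), IR).
% IR = [verb(replace), subject(X), object(spindle_motor),
%       elided(subject(X))].
```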
2.3. Lexical Processing

The lexicon contains several thousand entries related to the particular subdomain of equipment maintenance. It is a modified version of the LSP lexicon with words classified as to part of speech and subcategorized in limited ways (e.g., verbs are subcategorized for their complement types). It also handles multi-word idioms, dates, times and part numbers. The lexicon can be expanded by means of an interactive lexical entry program.

The lexical processor reduces morphological variants to a single root form which is stored with each entry. For example, the form has is transformed to the root form have in Has select lock. In addition, this facility is useful in handling abbreviations: the term awp is regularized to the multi-word expression waiting^for^part. This expression in turn is regularized to the root form wait^for^part, which takes as a direct object a particular part or part number, as in is awp 2155-6147.

Multi-word expressions, which are typical of jargon in specialized domains, are handled as single lexical items. This includes expressions such as disk drive or select lock, whose meaning within a particular domain is often not readily computed from its component parts. Handling such frozen expressions as "idioms" reduces parse times and number of ambiguities.

Another feature of the lexical processing is the ease with which special forms (such as part numbers or dates) can be handled. A special "forms grammar", written as a definite clause grammar [Pereira1980], can parse part numbers, as in awaiting part 2155-6147, or complex date and time expressions, as in disk drive up at 11/17-1236. During parsing, the forms grammar performs a well-formedness check on these expressions and assigns them their appropriate lexical category.

3. Semantics

There are two separate components that perform semantic analysis, NOUN PHRASE SEMANTICS and CLAUSE SEMANTICS. They are each called after parsing the relevant syntactic structure to test semantic well-formedness while producing partial semantic representations. Clause semantics is based on Inference Driven Semantic Analysis [Palmer1985], which decomposes verbs into component meanings and fills their semantic roles with syntactic constituents. A KNOWLEDGE BASE, the formalization of each domain into logical terms, SEMANTIC PREDICATES, is essential for the effective application of Inference Driven Semantic Analysis, and for the final production of a text representation. The result of the semantic analysis is a set of PARTIALLY instantiated semantic predicates which is similar to a frame representation. To produce this representation, the semantic components share access to a knowledge base, the DOMAIN MODEL, that contains generic descriptions of the domain elements corresponding to the lexical entries. The model includes a detailed representation of the types of assemblies that these elements can occur in. The semantic components are designed to work independently of the particular model, and rely on an interface to ensure a well-defined interaction with the domain model. The domain model, noun phrase semantics and clause semantics are all explained in more detail in the following three subsections.

3.1. Domain Model

The domain currently being modelled by SDC is the Maintenance Report domain. The texts being analyzed are actual maintenance reports as they are called into the Burroughs Telephone Tracking System by the field engineers and typed in by the telephone operator.
These reports give information about the customer who has the problem, specific symptoms of the problem, any actions taken by the field engineer to try and correct the problem, and the success or failure of such actions. The goal of the text analysis is to automatically generate a database of maintenance information that can be used to correlate customers to problems, problem types to machines, and so on.

The first step in building a domain model for maintenance reports is to build a semantic net-like representation of the type of machine involved. The machine in the example text given below is the B4700. The possible parts of a B4700 and the associated properties of these parts can be represented by an isa hierarchy and a haspart hierarchy. These hierarchies are built using four basic predicates: system, isa, hasprop, haspart. For example the system itself is indicated by system(b4700). The isa predicate associates TYPES with components, such as isa(spindle^motor,motor). Properties are associated with components using the hasprop relationship, and are inherited by anything of the same type. The main components of the system: cpu, power_supply, disk, printer, peripherals, etc., are indicated by haspart relations, such as haspart(b4700,cpu), haspart(b4700,power_supply), haspart(b4700,disk), etc. These parts are themselves divided into subparts which are also indicated by haspart relations, such as haspart(power_supply,converter).

This method of representation results in a general description of a computer system. Specific machines represent INSTANCES of this general representation. When a particular report is being processed, id relations are created by noun phrase semantics to associate the specific computer parts being mentioned with the part descriptions from the general machine representation. So a particular B4700 would be indicated by predicates such as these: id(b4700,system1), id(cpu,cpu1), id(power_supply,power_supply1), etc.
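The inheritance of properties down the isa hierarchy can be sketched in a few clauses (ours; the paper does not show PUNDIT's inheritance code, and the property name below is hypothetical):

```prolog
% Hypothetical fragment of the B4700 model.
isa(spindle_motor, motor).
hasprop(motor, has_bearings).

% A component has a property if it is stated directly for it,
% or inherited from its type via the isa hierarchy.
has_property(X, P) :- hasprop(X, P).
has_property(X, P) :- isa(X, Type), has_property(Type, P).

% ?- has_property(spindle_motor, has_bearings).   % succeeds by inheritance
```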
3.2. Noun phrase semantics

Noun phrase semantics is called by the parser during the parse of a sentence, after each noun phrase has been parsed. It relies heavily on the domain model for both determining semantic well-formedness and building partial semantic representations of the noun phrases. For example, in the sentence, field engineer replaced disk drive at 11/2/0800, the phrase disk drive at 11/2/0800 is a syntactically acceptable noun phrase (as in participants at the meeting). However, it is not semantically acceptable in that at 11/2/0800 is intended to designate the time of the replacement, not a property of the disk drive. Noun phrase semantics will inform the parser that the noun phrase is not semantically acceptable, and the parser can then look for another parse. In order for this capability to be fully utilized, however, an extensive set of domain-specific rules about semantic acceptability is required. At present we have only the minimal set used for the development of the basic mechanism. For example, in the case described here, at 11/2/0800 is excluded as a modifier for disk drive by a rule that permits only the name of a location as the object of at in a prepositional phrase modifying a noun phrase.

The second function of noun phrase semantics is to create a semantic representation of the noun phrase, which will later be operated on by reference resolution. For example, the semantics for the bad disk drive would be represented by the following Prolog clauses:

[id(disk^drive,X), bad(X), def(X), full_np(X)]

that is, X was referred to with a full, definite noun phrase (full_np(X)) rather than a pronoun or indefinite noun phrase.

3.3. Clause semantics

In order to produce the correct predicates and the correct instantiations, the verb is first decomposed into a semantic predicate representation appropriate for the domain. The arguments to the predicates constitute the SEMANTIC ROLES of the verb, which are similar to cases. There are domain-specific criteria for selecting a range of semantic roles. In this domain the semantic roles include: agent, instrument, theme, object1, object2, symptom and mod. Semantic roles can be filled either by a syntactic constituent supplied by a mapping rule or by reference resolution, requiring close cooperation between semantics and reference resolution. Certain semantic roles are categorized as ESSENTIAL, so that pragmatics knows that they need to be filled if there is no syntactic constituent available. The default categorization is NON-ESSENTIAL, which does not require that the role be filled. Other semantic roles are categorized as NON-SPECIFIC or SPECIFIC depending on whether or not the verb requires a specific referent for that semantic role (see Section 4). The example given in Section 5 illustrates the use of both a non-specific semantic role and an essential semantic role. This section explains the decompositions of the verbs relevant to the example, and identifies the important semantic roles.

The decomposition of have is very domain-specific:

have(time(Per)) <-
    symptom(object1(O1),symptom(S),time(Per))

It indicates that a particular symptom is associated with a particular object, as in "the disk drive has select lock." The object1 semantic role would be filled by the disk drive, the subject of the clause, and the symptom semantic role would be filled by select lock, the object of the clause. The time(Per) is always passed around, and is occasionally filled by a time adjunct, as in the disk drive had select lock at 0800.

In addition to the mapping rules that are used to associate syntactic constituents with semantic roles, there are selection restrictions associated with each semantic role. The selection restrictions for have test whether or not the filler of the object1 role is allowed to have the type of symptom that fills the symptom role. For example, only disk drives have select locks.

Mapping Rules

The decomposition of replace is also a very domain-specific decomposition, indicating that an agent can use an instrument to exchange two objects:

replace(time(Per)) <-
    cause(agent(A),
          use(instrument(I),
              exchange(object1(O1),object2(O2),time(Per))))

The following mapping rule specifies that the agent can be indicated by the subject of the clause:

agent(A) <- subject(A) / X

The mapping rules make use of intuitions about syntactic cues for indicating semantic roles first embodied in the notion of case [Fillmore1968, Palmer1981]. Some of these cues are quite general, while other cues are very verb-specific. The mapping rules can take advantage of generalities like "SUBJECT to AGENT" syntactic cues while still preserving context sensitivities. This is accomplished by making the application of the mapping rules "situation-specific" through the use of PREDICATE ENVIRONMENTS. The previous rule is quite general and can be applied to every agent semantic role in this domain.
This is indicated by the X on the right-hand side of the "/", which refers to the predicate environment of the agent, i.e., anything. Other rules, such as "WITH-PP to OBJECT2," are much less general, and can only apply under a set of specific circumstances. The predicate environments for an object1 and object2 are specified more explicitly. An object1 can be the object of the sentence if it is contained in the semantic decomposition of a verb that includes an agent and belongs to the repair class of verbs. An object2 can be indicated by a with prepositional phrase if it is contained in the semantic decomposition of a replace verb:

object1(Part1) <- obj(Part1) /
    cause(agent(A),Repair)

object2(Part2) <- pp(with,Part2) /
    cause(agent(A),use(I,exchange(object1(O1),object2(O2),time(Per))))

Selection Restrictions

The selection restriction on an agent is that it must be a field engineer, and an instrument must be a tool. The selection restrictions on the two objects are more complicated, since they must be machine parts, have the same type, and yet also be distinct objects. In addition, the first object must already be associated with something else in a haspart relationship; in other words, it must already be included in an existing assembly. The opposite must be true of the second object: it must not already be included in an assembly, so it must not be associated with anything else in a haspart relationship.

There is also a pragmatic restriction associated with both objects that has not been associated with any of the semantic roles mentioned previously. Both object1 and object2 are essential semantic roles. Whether or not they are mentioned explicitly in the sentence, they must be filled, preferably by an entity that has already been mentioned, but if not that, then entities will be created to fill them [Palmer1983]. This is accomplished by making an explicit call to reference resolution to find referents for essential semantic roles, in the same way that reference resolution is called to find the referent of a noun phrase. This is not done for non-essential roles, such as the agent and the instrument in the same verb decomposition. If they are not mentioned they are simply left unfilled. The instrument is rarely mentioned, and the agent could easily be left out, as in The disk drive was replaced at 0800.³ In other domains, the agent might be classified as obligatory, and then it would have to be filled in.

There is another semantic role that has an important pragmatic restriction on it in this example, the object2 semantic role in wait^for^part (awp):

idiomVerb(wait^for^part,time(Per)) <-
    ordered(object1(O1),object2(O2),time(Per))

The semantics of wait^for^part indicates that a particular type of part has been ordered, and is expected to arrive. But it is not a specific entity that might have already been mentioned. It is a more abstract object, which is indicated by restricting it to being non-specific. This tells reference resolution that although a syntactic constituent, preferably the object, can and should fill this semantic role, and must be of type machine-part, reference resolution should not try to find a specific referent for it (see Section 4).

The last verb representation that is needed for the example is the representation of be:

be(time(Per)) <-
    attribute(theme(T),mod(M),time(Per))

In this domain be is used to associate predicate adjectives or nominals with an object, as in disk drive is up or spindle motor is bad. The representation merely indicates that a modifier is associated with a theme in an attribute relationship. Noun phrase semantics will eventually produce the same representation for the bad spindle motor, although it does not yet.

³ Note that an elided subject is handled quite differently, as in replaced disk drive. There the missing subject is assumed to fill the agent role, and an appropriate referent is found by reference resolution.
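How such environment-conditioned mapping rules might be executed can be sketched as follows (our reconstruction, not PUNDIT's code; map_role/3, the constituent terms, and the environment term are assumed names):

```prolog
% map_role(?Role, +Parse, +Env): fill a semantic role from a syntactic
% constituent, but only when the role's predicate environment matches.
map_role(agent(A), Parse, _AnyEnv) :-      % "SUBJECT to AGENT": general
    member(subject(A), Parse).
map_role(object2(P), Parse, Env) :-        % "WITH-PP to OBJECT2": only
    member(pp(with, P), Parse),            % inside a replace decomposition
    Env = exchange(_, _, _).
```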
4. Reference Resolution

Reference resolution is the component which keeps track of references to entities in the discourse. It creates labels for entities when they are first directly referred to, or when their existence is implied by the text, and recognizes subsequent references to them. Reference resolution is called from clause semantics when clause semantics is ready to instantiate a semantic role. It is also called from pragmatic restrictions when they specify a referent whose existence is entailed by the meaning of a verb. The system currently covers many cases of singular and plural noun phrases, pronouns, one-anaphora, nominalizations, and non-specific noun phrases; reference resolution also handles adjectives, prepositional phrases and possessive pronouns modifying noun phrases. Noun phrases with and without determiners are accepted. Dates, part numbers, and proper names are handled as special cases. Not yet handled are compound nouns, quantified noun phrases, conjoined noun phrases, relative clauses, and possessive nouns. The general reference resolution mechanism is described in detail in [Dahl1986]. In this paper the focus will be on the interaction between reference resolution and clause semantics. The next two sections will discuss how reference resolution is affected by the different types of semantic roles.

4.1. Obligatory Constituents and Essential Semantic Roles

A slot for a syntactically obligatory constituent such as the subject appears in the intermediate representation whether or not a subject is overtly present in the sentence. It is possible to have such a slot because the absence of a subject is a syntactic fact, and is recognized by the parser. Clause semantics calls reference resolution for such an implicit constituent in the same way that it calls reference resolution for explicit constituents. Reference resolution treats elided noun phrases exactly as it treats pronouns, that is, by instantiating them to the first member of a list of potential pronominal referents, the FocusList. The general treatment of pronouns resembles that of [Sidner1979], although there are some important differences, which are discussed in detail in [Dahl1986]. The hypothesis that elided noun phrases can be treated in much the same way as pronouns is consistent with previous claims by [Gundel1980] and [Kameyama1985] that in languages which regularly allow zero-np's, the zero corresponds to the focus. If these claims are correct, it is not surprising that in a sublanguage that allows zero-np's, the zero should also correspond to the focus.

After control returns to clause semantics from reference resolution, semantics checks the selectional restrictions for that referent in that semantic role of that verb. If the selectional restrictions fail, backtracking into reference resolution occurs, and the next candidate on the FocusList is instantiated as the referent. This procedure continues until a referent satisfying the selectional restrictions is found.
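This generate-and-test interplay maps naturally onto Prolog backtracking. Below is a minimal sketch of ours (resolve_elided/4 and the selection facts are assumed names, not PUNDIT's):

```prolog
% Candidates are tried in FocusList order; a failed selectional
% restriction forces backtracking to the next candidate.
resolve_elided(FocusList, Verb, Role, Referent) :-
    member(Referent, FocusList),
    selection(Verb, Role, Referent).

% Hypothetical domain facts for the example discourse:
machine_part(drive1).
selection(have, object1, R) :- machine_part(R).

% ?- resolve_elided([event1, drive1], have, object1, R).
% R = drive1.   (event1 fails the restriction and is skipped)
```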
For example, in Disk drive is down. Has select lock, the system instantiates the disk drive, which at this point is the first member of the FocusList, as the object1 of have:

[event39]
have(time(time1))
    symptom(object1([drive10]),
            symptom([lock17]),time(time1))

Essential roles might also not be expressed in the sentence, but their absence cannot be recognized by the parser, since they can be expressed by syntactically optional constituents. For example, in the field engineer replaced the motor, the new replacement motor is not mentioned, although in this domain it is classified as semantically essential. With verbs like replace, the type of the replacement, motor in this case, is known because it has to be the same type as the replaced object. Reference resolution for these roles is called by pragmatic rules which apply when there is no overt syntactic constituent to fill a semantic role. Reference resolution treats these referents as if they were full noun phrases without determiners. That is, it searches through the context for a previously mentioned entity of the appropriate type, and if it doesn't find one, it creates a new discourse entity. The motivation for treating these as full noun phrases is simply that there is no reason to expect them to be in focus, as there is for elided noun phrases.

4.2. Noun Phrases in Non-Specific Contexts

Indefinite noun phrases in contexts like the field engineer ordered a disk drive are generally associated with two readings. In the specific reading the disk drive ordered is a particular disk drive, say, the one sitting on a certain shelf in the warehouse. In the non-specific reading, which is more likely in this sentence, no particular disk drive is meant; any disk drive of the appropriate type will do. Handling noun phrases in these contexts requires careful integration of the interaction between semantics and reference resolution, because semantics knows about the verbs that create non-specific contexts, and reference resolution knows what to do with noun phrases in these contexts. For these verbs a constraint is associated with the semantics rule for the semantic role object2 which states that the filler for the object2 must be non-specific.⁴ This constraint is passed to reference resolution, which represents a non-specific noun phrase as having a variable in the place of the pointer, for example, id(motor,X).

Non-specific semantic roles can be illustrated using the object2 semantic role in wait^for^part (awp). The part that is being awaited is non-specific, i.e., can be any part of the appropriate type. This tells reference resolution not to find a specific referent, so the referent argument of the id relationship is left as an uninstantiated variable. The analysis of fe is awp spindle motor would fill the object1 semantic role with fe1 from id(fe,fe1), and the object2 semantic role with X from id(spindle^motor,X), as in ordered(object1(fe1),object2(X)). If the spindle motor is referred to later on in a relationship where it must become specific, then reference resolution can instantiate the variable with an appropriate referent such as spindle^motor3 (see Section 5.6).

⁴ The specific reading is not available at present, since it is considered to be unlikely to occur in this domain.
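The device of an open referent variable can be sketched directly (our illustration; gensym/2 is SWI-Prolog's symbol generator, and the predicate names are assumed):

```prolog
% A non-specific mention carries an unbound referent slot.
nonspecific_mention(id(spindle_motor, _)).

% When a later relationship forces a specific referent, unifying with
% the open slot instantiates it to a fresh symbol.
make_specific(id(Type, Ref)) :-
    var(Ref),
    gensym(Type, Ref).

% ?- nonspecific_mention(M), make_specific(M).
% M = id(spindle_motor, spindle_motor1).
```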
5. Sample Text: A sentence-by-sentence analysis

The sample text given below is a slightly emended version of a maintenance report. The parenthetical phrases have been inserted. The following summary of an interactive session with PUNDIT illustrates the mechanisms by which the syntactic, semantic and pragmatic components interact to produce a representation of the text.

1. disk drive (was) down (at) 11/16-2305.
2. (has) select lock.
3. spindle motor is bad.
4. (is) awp spindle motor.
5. (disk drive was) up (at) 11/17-1236.
6. replaced spindle motor.

5.1. Sentence 1: Disk drive was down at 11/16-2305.

As explained in Section 3.2 above, the noun phrase disk drive leads to the creation of an id of the form:

id(disk^drive,[drive1])

Because dates and names generally refer to unique entities rather than to exemplars of a general type, their ids do not contain a type argument: date([11/16-2305]), name([paoli]).

The interpretation of the first sentence of the report depends on the semantic rules for the predicate be. The rules for this predicate specify three semantic roles: a theme to whom or which is attributed a modifier, and the time. After a mapping rule in the semantic component of the system instantiates the theme semantic role with the sentence subject, disk drive, the reference resolution component attempts to identify this referent. Because disk drive is in the first sentence of the discourse, no prior references to this entity can be found. Further, this entity is not presupposed by any prior linguistic expressions. However, in the maintenance domain, when a disk drive is referred to it can be assumed to be part of a B4700 computer system. As the system tries to resolve the reference of the noun phrase disk drive by looking for previously mentioned disk drives, it finds that the mention of a disk drive presupposes the existence of a system. Since no system has been referred to, a pointer to a system is created at the same time that a pointer to the disk drive is created. Both entities are now available for future reference. In like fashion, the propositional content of a complete sentence is also made available for future reference. The entities corresponding to propositions are given event labels; thus event1 is the pointer to the first proposition. The newly created disk drive, system and event entities now appear in the discourse information in the form of a list along with the date:

id(event,[event1])
id(disk^drive,[drive1])
date([11/16-2305])
id(system,[system1])

Note however, that only those entities which have been explicitly mentioned appear in the FocusList:

FocusList: [[event1],[drive1],[11/16-2305]]

The propositional entity appears at the head of the focus list followed by the entities mentioned in full noun phrases.⁵

⁵ The order in which full noun phrase mentions are added to the FocusList depends on their syntactic function and linear order. For full noun phrases, direct object mentions precede subject mentions, followed by all other mentions given in the order in which they occur in the sentence. See [Dahl1986] for details.

In addition to the representation of the new event, the pragmatic information about the developing discourse now includes information about part-whole relationships, namely that drive1 is a part which is contained in system1.

Part-Whole Relationships:
haspart([system1],[drive1])

The complete representation of event1, appearing in the event list in the form shown below, indicates that at the time given in the prepositional phrase at 11/16-2305 there is a state of affairs denoted as event1 in which a particular disk drive, i.e., drive1, can be described as down:

[event1]
be(time([11/16-2305]))
    attribute(theme([drive1]),
              mod(down),time([11/16-2305]))

5.2. Sentence 2: Has select lock.

The second sentence of the input text is a sentence fragment and is recognized as such by the parser. Currently, the only type of fragment which can be parsed can have a missing subject but must have a complete verb phrase.
Before semantic analysis, the output of the parse contains, among other things, the following constituent list: [subj([X]),obj([Y])]. That is, the syntactic component represents the arguments of the verb as variables. The fact that there was no overt subject can be recognized by the absence of semantic information associated with X, as discussed in Section 3.2. The semantics for the maintenance domain sublanguage specifies that the thematic role instantiated by the direct object of the verb to have must be a symptom of the entity referred to by the subject. Reference resolution treats an empty subject much like a pronominal reference, that is, it proposes the first element in the FocusList as a possible referent. The first proposed referent, event1, is rejected by the semantic selectional constraints associated with the verb have, which, for this domain, require the role mapped onto the subject to be classified as a machine part and the role mapped onto the direct object to be classified as a symptom. Since the next item in the FocusList, drive1, is a machine part, it passes the selectional constraint and becomes matched with the empty subject of has select lock. Since no select lock has been mentioned previously, the system creates one. For the sentence as a whole then, two entities are newly created: the select lock ([lock1]) and the new propositional event ([event2]): id(event,[event2]), id(select^lock,[lock1]). The following representation is added to the event list, and the FocusList and ids are updated appropriately:⁶

[event2]
have(time(time1))
    symptom(object1([drive1]),
            symptom([lock1]),time(time1))

⁶ This version only deals with explicit mentions of time, so for this sentence the time argument is filled in with a gensym that stands for an unknown time period. The current version of PUNDIT uses verb tense and verb semantics to derive implicit time arguments.
After processing this sentence, the list of available entities has been incremented by three:

    id(event,[event4])
    id(part,[_2317])
    id(field^engineer,[engineer1])

The new event is represented as follows:

    [event4]
    idiomVerb(wait^for^part, time(time2))
    wait(object1([engineer1]), object2([_2317]), time(time2))

5.5. Sentence 5: Disk drive was up at 11/17-1236.

In the emended version of sentence 5 the disk drive is presumed to be the same drive referred to previously, that is, drive1. The semantic analysis of sentence 5 is very similar to that of sentence 1. As shown in the following event representation, the predicate expressed by the modifier up is attributed to the theme drive1 at the specified time.

    [event5]
    be(time([11/17-1236]))
    attribute(theme([drive1]), mod(up), time([11/17-1236]))

5.6. Sentence 6: Replaced spindle motor.

The sixth sentence is another fragment consisting of a verb phrase with no subject. As before, reference resolution tries to find a referent in the current FocusList which is a semantically acceptable subject given the thematic structure of the verb and the domain-specific selectional restrictions associated with them. The thematic structure of the verb replace includes an agent role to be mapped onto the sentence subject. The only agent in the maintenance domain is a field engineer. Reference resolution finds the previously mentioned engineer created for awp spindle motor, [engineer1]. It does not find an instrument, and since this is not an essential role, this is not a problem. It simply fills it in with another gensym that stands for an unknown filler, unknown1.

When looking for the referent of a spindle motor to fill the object1 role, reference resolution first finds the non-specific spindle motor mentioned in the awp spindle motor sentence, and a specific referent is found for it. However, this fails the selection restrictions, since although it is a machine part, it is not already associated with an assembly, so backtracking occurs and the referent instantiation is undone. The next spindle motor on the FocusList is the one from spindle motor is bad, [motor1]. This does pass the selection restrictions since it participates in a haspart relationship. The last semantic role to be filled is the object2 role. Now there is a restriction saying this role must be filled by a machine part of the same type as object1 which is not already included in an assembly, viz., the non-specific spindle motor. Reference resolution finds a new referent for it, which automatically instantiates the variable in the id term as well.

The representation can be decomposed further into the two semantic predicates missing and included, which indicate the current status of the parts with respect to any existing assemblies. The haspart relationships are updated, with the old haspart relationship for [motor1] being removed, and a new haspart relationship for [motor2] being added. The final representation of the text will be passed through a filter so that it can be suitably modified for inclusion in a database.

    [event6]
    replace(time(time3))
    cause(agent([engineer1]),
          use(instrument(unknown1),
              exchange(object1([motor1]), object2([motor2]), time(time3))))
    included(object2([motor2]), time(time3))
    missing(object1([motor1]), time(time3))

    Part-Whole Relationships:
    haspart([drive1],[motor2])
    haspart([system1],[drive1])
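The backtracking search just described for replace can likewise be sketched as a pair of role constraints over candidate referents. Again, this is only an illustration under invented facts, not PUNDIT's implementation.

    % Sketch: fill the object1/object2 roles of 'replace'.
    % object1 must be a part already in an assembly; object2 a part of
    % the same type not yet included in any assembly.
    part_type(motor1, spindle_motor).
    part_type(motor2, spindle_motor).
    haspart(drive1, motor1).

    fill_replace(FocusList, Obj1, Obj2) :-
        member(Obj1, FocusList),
        part_type(Obj1, Type),
        haspart(_, Obj1),          % already associated with an assembly
        member(Obj2, FocusList),
        Obj2 \== Obj1,
        part_type(Obj2, Type),
        \+ haspart(_, Obj2).       % not yet included in an assembly

With FocusList = [motor2, motor1], the first candidate for Obj1, motor2, fails the haspart test, so Prolog backtracks to motor1, just as the referent instantiation is undone above; Obj2 then binds to motor2.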
6. Conclusion

This paper has discussed the communication between syntactic, semantic and pragmatic modules that is necessary for making implicit linguistic information explicit. The key is letting syntax and semantics recognize missing linguistic entities as implicit entities, so that they can be marked as such, and reference resolution can be directed to find specific referents for the entities. Implicit entities may be either empty syntactic constituents in sentence fragments or unfilled semantic roles associated with domain-specific verb decompositions. In this way the task of making implicit information explicit becomes a subset of the tasks performed by reference resolution. The success of this approach is dependent on the use of syntactic and semantic categorizations such as ELLIDED and ESSENTIAL which are meaningful to reference resolution, and which can guide reference resolution's decision-making process.

ACKNOWLEDGEMENTS

We would like to thank Bonnie Webber for her very helpful suggestions on exemplifying semantics/pragmatics cooperation.

REFERENCES

[Dahl1986] Deborah A. Dahl, Focusing and Reference Resolution in PUNDIT, submitted for publication, 1986.

[Dowding1986] John Dowding and Lynette Hirschman, Dynamic Translation for Rule Pruning in Restriction Grammar, submitted to AAAI-86, Philadelphia, 1986.

[Fillmore1968] C. J. Fillmore, The Case for Case. In Universals in Linguistic Theory, E. Bach and R. T. Harms (ed.), Holt, Rinehart, and Winston, New York, 1968.

[Gundel1980] Jeanette K. Gundel, Zero-NP Anaphora in Russian. Chicago Linguistic Society Parasession on Pronouns and Anaphora, 1980.

[Hirschman1982] L. Hirschman and K. Puder, Restriction Grammar in Prolog. In Proc. of the First International Logic Programming Conference, M. Van Caneghem (ed.), Association pour la Diffusion et le Developpement de Prolog, Marseilles, 1982, pp. 85-90.

[Hirschman1985] L. Hirschman and K. Puder, Restriction Grammar: A Prolog Implementation. In Logic Programming and its Applications, D.H.D. Warren and M. VanCaneghem (ed.), 1985.

[Hirschman1986] L. Hirschman, Conjunction in Meta-Restriction Grammar. J. of Logic Programming, 1986.

[Kameyama1985] Megumi Kameyama, Zero Anaphora: The Case of Japanese, Ph.D. thesis, Stanford University, 1985.

[Palmer1983] M. Palmer, Inference Driven Semantic Analysis. In Proceedings of the National Conference on Artificial Intelligence (AAAI-83), Washington, D.C., 1983.

[Palmer1981] Martha S. Palmer, A Case for Rule Driven Semantic Processing. Proc. of the 19th ACL Conference, June, 1981.

[Palmer1985] Martha S. Palmer, Driving Semantics for a Limited Domain, Ph.D. thesis, University of Edinburgh, 1985.

[Pereira1980] F. C. N. Pereira and D. H. D. Warren, Definite Clause Grammars for Language Analysis -- A Survey of the Formalism and a Comparison with Augmented Transition Networks. Artificial Intelligence, 13, 1980, pp. 231-278.

[Sager1981] N. Sager, Natural Language Information Processing: A Computer Grammar of English and Its Applications. Addison-Wesley, Reading, Mass., 1981.

[Sidner1979] Candace Lee Sidner, Towards a Computational Theory of Definite Anaphora Comprehension in English Discourse, MIT-AI TR-537, Cambridge, MA, 1979.
FORUM ON MACHINE TRANSLATION

Machine Translation will not Work

Martin Kay
Xerox Palo Alto Research Center
3333 Coyote Hill Road
Palo Alto, CA 94304

PANELIST STATEMENT

Large expenditures on fundamental scientific research are usually limited to the hard sciences. It is therefore entirely reasonable to suppose that, if large sums of money are spent on machine translation, it will be with the clear expectation that what is being purchased is principally development and engineering, and that the result will contribute substantially to the solution of some pressing problem. Anyone who accepts large (or small) sums on this understanding is either technically naive or dangerously cynical.

It may certainly be that

1. machine translation could provide a valuable framework for fundamental research;
2. texts in highly restricted subsets of natural language could be devised for particular purposes and texts in them translated automatically;
3. computers have an important role to fill in making translations;
4. translations of extremely low quality may be acceptable on occasions.

However,

1. the fundamental research is so far from applicability,
2. the language subsets are so restricted,
3. the useful computer technologies are so different from machine translation,
4. the quality of the translations that can be produced of natural texts by automatic means is so low, and
5. the occasions on which those translations could be useful are so rare,

that the use of the term in these cases can only result in confusion if not deception.

A determined attempt was made to bring machine translation to the point of usability in the sixties. It has become fashionable to deride these as "first generation" systems and to refer to what is being done now as belonging to the second or third generation. It should surely be possible for those who think that the newer systems can succeed where the earlier ones failed, to point to problems that have been solved since the sixties that are so crucial as substantially to change our assessment of what can be achieved. We know a good deal more about programming techniques and have larger machines to work with; we have more elegant theories of syntax and what modern linguists are pleased to call semantics; and there has been some exploratory work on anaphora. But we still have little idea how to translate into a closely related language like French or German, English sentences containing such words as "he", "she", "it", "not", "and", and "of". Furthermore, such work as has been done on these problems has been studiously ignored by all those currently involved in developing systems.

Unfortunately, the sums that are being spent on MT in Europe and Japan are large enough to make virtually inevitable the production of a second ALPAC report sometime in the next few years. This will inevitably have a devastating effect on the whole field of computational linguistics, everywhere in the world. The report will be the more devastating for the fact that much of the money has in fact been spent frivolously, and much of the work has been incompetent, even by today's limited standards.
FORUM ON MACHINE TRANSLATION

Machine Translation already does Work

Margaret King
ISSCO
54, rte des Acacias
CH-1227 Geneva, Switzerland

PANELIST STATEMENT

The first difficulty in answering a question like "Does machine translation work?" is that the question itself is ill-posed. It takes for granted that there is one single thing called machine translation and that everyone is agreed about what it is. But in fact, even a cursory glance at the systems already around, either in regular operational use or under development, will reveal a wide range of different types of systems.

If we take first the dimension determined by who/what does most of the work, the machine or the translator or revisor, at one end of the scale are systems where the human does not intervene at all during the process of translation - "batch" systems for convenience here. Even amongst the batch systems there is considerable variety: the degree of pre-editing permitted or required varies greatly, as does the amount of post-editing foreseen. Some systems insist that anything translated by the machine should require no post-editing, and thus (sometimes) reject as unsuitable for machine treatment a part of the text. Others take it for granted that machine translation will normally be post-edited, just as human translation is normally revised. Some systems aim at giving nothing more than a very rough raw translation, to be used by the human translator only as a starting point for producing his own translation. Some systems require that the document to be translated conform to a restricted syntax, others leave the author relatively free.

Next comes a class of systems that one might style "interactive" systems, where the bulk of the work is still done by the machine, but where the system interacts with a human to a greater or lesser degree. Such systems may ask the human, for example, to resolve an ambiguity in the source text, to choose between a set of target language terms, to decide on correct use of prepositions, or any combination of these and other similar tasks.

Shifting towards the end of the scale where the bulk of the work is done by a human translator aided by a computer system, there are systems which will automatically insert identified technical terms, or replace a phrase occurring repeatedly in the text by its translation wherever it appears, leaving the rest of the translation to be done by the human translator, and systems where the translator as he produces the translation can consult specialist or general dictionaries, either constructed by the translator himself for the particular needs of the text, or supplied by the system manufacturer. Many - indeed most - such systems are allied with clever text-processing systems specially designed for use by translators.

Finally, although perhaps not strictly machine translation systems, but certainly of potentially great practical utility to the working translator, are independent packages, not necessarily integrated into a translator's work station type of environment. These include automated terminology banks, dictionary look-up facilities, and general tools such as spelling or grammar checkers.
In all this, I have quite deliberately omitted consideration of machine translation systems conceived of as primarily research tools, intended to test the validity of a particular theory or to experiment with some new proposal, since I take it that the worry lying behind the original question - and behind the moderator's statement - concerns systems which are in some way subject to external evaluation, and which can therefore lead to dissatisfaction. The status of research and experimental systems as valuable research tools seems quite uncontentious.

Now, just as machine translation is not a single indivisible whole, but rather a range of systems sharing only the common characteristic that they are used in one way or another in performing the task of translation, so the need for machine translation is different, depending on the particular characteristics of individual situations.

Here, so many factors come into determining what the real need is that I shall not even attempt to give an exhaustive list, limiting myself instead to a handful of indicative, but necessarily over-simplified, examples. Take first the example of a large translation service, translating documents essentially very similar to one another, but in great volume and frequently at very short notice. This is the typical situation in which what is needed is a batch service, producing reasonable quality translation which can if necessary be revised, where the degree of revision to be done depends on the use to which the translated document is to be put. (If the point of the document is to inform its readers in very general terms of what was discussed in a particular meeting, perhaps no revision at all is necessary; if it is to serve as the basis of discussion in a subsequent meeting, it may require quite a lot of revision; if it is to serve as the basis of a treaty or an agreement, it should never have been allowed near a machine translation system in the first place, and the translation should be thrown away.) In such a situation, an interactive system, on the other hand, is likely to be unsuitable, since the main problem is the bulk of work to be done, and the translator or revisor is better occupied dealing with those documents unsuitable for machine treatment or revising where necessary than in sitting in front of a screen watching the machine at work.

In a different situation, however, where what is required is very high quality translation, and where the volume of translation to be done is a less pressing problem, so that the main concern is in rationalising the translator's work whilst contingently increasing his productivity, an interactive system may prove to be the ideal choice, especially if the text type is a mixture of repetitive material which it is boring (and time-wasting) to translate manually each time it appears and quite delicate text requiring great care.

In yet another situation the major problem may be the typical length of documents, combined with a need for speed and a need for terminological accuracy, so that a single document is split over a number of translators working independently, but all must use the same translation for certain terms. Here, the ideal system might well be simply to provide all the translators with access to a clever text-processor from within which they could easily access a common term bank, with all the rest being left to the translator.
There is no need to labour the point: different set-ups have different problems to solve, and therefore, whether they know it or not, need different kinds of machine translation systems.

Now we can return to the original question: machine translation works when the machine translation system is able to resolve in a significant measure the particular translation problems in a particular situation. To put this more crudely, no-one should try to persuade the translator of Faust that a batch translation system will do him any good at all, and no-one should try to persuade the translation service that churns out several hundred invitations to meetings every day that an automated dictionary look-up facility will solve their problems.

Once this is realized, the puzzle contained in people asking questions like whether it is a good idea to work on machine translation, in a world where it is demonstrably the case that machine translation systems exist and are counted satisfactory by their users, begins to go away. The successful systems are those where what is provided by the system matches what is required to solve the real problem, where the system developers realistically assessed what they could offer, went ahead and provided that, and where those who commissioned the construction or purchase of a system had expectations matched by what was actually delivered.

A final question to those who claim that it is somehow dangerous or irresponsible to promise to produce a machine translation system. If one promises and fails (apart of course from the general principle that one should always try to fulfil one's promises and not to promise what one cannot deliver), why is that more damaging to the field than working on speech-recognition and failing?
Semantic Acquisition In TELI: A Transportable, User-Customized Natural Language Processor

Bruce W. Ballard
Douglas E. Stumberger
AT&T Bell Laboratories
600 Mountain Avenue
Murray Hill, NJ 07974

Abstract

We discuss ways of allowing the users of a natural language processor to define, examine, and modify the definitions of any domain-specific words or phrases known to the system. An implementation of this work forms a critical portion of the knowledge acquisition component of our Transportable English-Language Interface (TELI), which answers English questions about tabular (first normal-form) data files and runs on a Symbolics Lisp Machine. However, our techniques enable the design of customization modules that are largely independent of the syntactic and retrieval components of the specific system they supply information to. In addition to its obvious practical value, this area of research is important because it requires careful attention to the formalisms used by a natural language system and to the interactions among the modules based on those formalisms.

1. Introduction

In constructing the Transportable English-Language Interface system (TELI), we have sought to respond to problems of both an applied and a scientific nature. Concerning the applied side of computational linguistics, we seek to redress the fact that many natural language prototypes, despite their sophistication and even their robustness, have fallen into disuse because of failures (1) to make known to users exactly what inputs are allowed (e.g. what words and phrases are defined) and (2) to provide capabilities that meet the precise needs of a given user or group of users (e.g. appropriate vocabulary, syntax, and semantics). Since experience has shown that neither users nor system designers can predict in advance all the words, phrases, and associated meanings that will arise in accessing a given database (cf. Tennant, 1979), we have sought to make TELI "transportable" in an extreme sense, where customizations may be performed (1) by end users, as opposed to the system designers, and (2) at any time during the processing of English sentences, rather than requiring a complete customization before English processing may occur.

In addition to the potential practical benefits of a user-customized interface, we feel that well-conceived transportability projects can make useful scientific contributions to computational linguistics, since single-domain systems and, to a lesser extent, systems adapted over weeks or months by their designers, afford opportunities to circumvent, rather than squarely address, important issues concerning (a) the precise nature of the formalisms the system is designed around, and (b) the interactions among system modules. Although customization efforts offer no guarantee against ad-hoc design or sloppy implementation, problems of the type mentioned above are less likely to go unnoticed when dealing with a system whose domain-specific information is supplied at run-time, especially when that information is being provided by the actual users of the system.

By way of overview, we note that the TELI system derives from previous work on the LDC project, as documented in Ballard (1982), Ballard (1984), Ballard, Lusth and Tinkham (1984), and Ballard and Tinkham (1984). The initial prototype of TELI, which runs on a Symbolics Lisp Machine, is designed to answer English questions about information stored in one or more tables (i.e. first-normal-form relational databases).
A sample view of the display screen during a session with TELI, which may give the flavor of how the system operates, is shown in Figure 1. Information on some aspects of knowledge acquisition not discussed in this paper, particularly with regard to syntactic case frames, can be found in Ballard (1986).

2. Types of Modifiers Available in TELI

The syntactic and semantic models adopted for TELI are intended to provide a unified treatment of a broad and extendible class of word and phrase types. By providing for an "extendible" class of constructs, we make the knowledge acquisition module of TELI independent of the natural language portion of the system, whose earlier version has been described in Ballard and Tinkham (1984) and Ballard, Lusth, and Tinkham (1984). In the remainder of this paper, the reader should bear in mind that the acquisition modules of TELI, including the menus they generate, are driven by extensible data structures that convey the linguistic coverage of the underlying natural language processor (NLP) for which information is being acquired. For example, incorporating adjective phrases into the system involved adding 12 lines of Lisp-like data specifications. This brevity is largely due to the use of case frames that embody dynamically alterable selectional restrictions (Ballard, 1986).

As an initial feeling for the coverage of the NLP for which information is currently acquired, TELI provides semantics for the word categories

Adjective - e.g. an expensive restaurant
Noun Modifier - e.g. a graduate student
Noun - e.g. a pub

and the phrase types

Adjective Phrase - e.g. employees responsible for the planning projects
Noun-Modifier Phrase - e.g. the speech researchers
Prepositional Phrase - e.g. the trails on the Franconia-Region map
Verb Phrase - e.g. employees that report to Brachman
Functional Noun Phrase - e.g. the size of department 11387, the colleagues of Litman

In addition to these user-defined modifier types, the system currently provides for negation, comparative and superlative forms of adjectives, possessives, and ordinals. Among the grammatical features supported are passives for verbs, reduced relatives for prepositional and adjective phrases, fronting of verb phrase complements, and other minor features. One important area for expansion involves quantifiers, both logical (e.g. "all") and numerical (e.g. "at least 3").

3. Principles Behind Semantic Acquisition

As noted above, our goal is to devise techniques that enable end users of a natural language processor to furnish all domain-specific information needed by the system. This information includes (1) the vocabulary needed for the data at hand; (2) various types of selectional restrictions that define acceptable phrase attachments; and most critically (3) the definitions of words and phrases. With this in mind, the primary criteria which the semantic acquisition component of TELI has been designed around are as follows.

To allow users to define, examine or modify domain-specific information at any time. This derives from our beliefs that the needs of a user or group of users cannot all be predicted in advance, and will probably change once the system has begun operation.

To enable users to impart new concepts to the system. We provide more than just synonym and paraphrase capabilities and, in fact, definitions may be arbitrarily complex, by being defined either (a) in terms of other definitions, which may be defined upon other definitions, or (b) as the conjunction of an arbitrary number of constraints.
[Figure 1: Sample Display Screen; Top-Level Menu of TELI. The screen shows an English input ("which trails that aren't long lead to a mountain on franconia ridge"), its internal representation, the resulting algebra query over the trails, mountains, and mtn-trails tables, the answer

    (TRAILS)
    TRAIL             LENGTH-KM
    OLD-BRIDLE-PATH     4.1
    LIBERTY-SPRING      4.7

and the system's top-level command menu, whose options include answering a question, editing the last input, printing the parse tree, running pieces of the NLP, and beginning a customization of vocabulary, syntax, semantics, or general information.]

To provide definition capabilities independent of modifier type. In our system, adjectives, nouns, prepositional phrases, verb phrases, and so forth are all defined in precisely the same way. This is achieved in part by treating all modifiers as n-place predicates.

To allow definitions to be given at various conceptual levels. Users are able to specify meanings (a) in English; (b) in terms of the meanings of previously defined words or phrases; (c) by reference to "conceptual" relationships, which have been abstracted to a level above that of the physical data files; or (d) in terms of database columns. We strive to minimize the need for low-level database references, since this helps (1) to avoid tedious and redundant references, and (2) to assure that most of our techniques will be applicable beyond the current conventional database setting.

To provide alternate modalities of specification. For example, the menu scheme described in Section 7.2 offers the user more assistance in making definitions, but is less powerful, than the alternative English and English-like methods described in Section 7.3. We prefer to let users decide when each modality is appropriate, rather than force a compromise among simplicity, reliability, and power.

To enable the system to provide help or guidance to the customizer. When defining a modifier, users may view all current modifiers of, or functions associated with, the object type(s) in question. Many other opportunities exist for co-operation on the part of the system. To avoid unnecessary limitations, however, users are generally able to override any hints made by the system.

4. Semantic Processing in TELI

The semantic model developed for TELI, in which definitions are acquired from users, assumes that (1) modifier meanings will be purely extensional, and can thus be treated as n-place predicates, and (2) semantic analysis will be almost entirely compositional. Concerning the latter assumption, we note that (a) some important disambiguations, including problems of word sense, will have been made during parsing by reference to selectional restrictions (Ballard and Tinkham, 1984), and (b) minimal re-ordering does occur in converting parse trees into internal representations.

4.1 Types of Semantics

All user-defined semantics, however acquired, are stored in a global Lisp structure indexed by the word or phrase being defined.
Single-word modifiers are indexed by the word being defined, its part of speech, and the entity it modifies; phrasal modifiers are indexed by the phrase type and the associated case frame. For example, the internal references

    (new adj room)
    (prep-ph (restaurant in county))

respectively index the definitions of "new", when used as an adjective modifier of rooms, and "in", as it relates restaurants to counties. As suggested by this indexing scheme, word meanings arise only in the context of their occurrence, never in isolation. Thus, "new room" and "restaurant in county" receive definitions, not "new" or "in". This decision lends generality to the definitional scheme, and any additional effort thereby needed to make multiple definitions is minimized by the provisions for borrowed meanings, as described in Section 7.4.

Although our representation strategies allow for definitions that involve relatively elaborate traversals of the physical data files, TELI does not presently provide for arithmetic computations. Thus, the input "Which restaurants are within 3 blocks of China Gardens?" requires a 2-place "distance" function and, unless the underlying data files provide distances between restaurants (there are N-squared such distances to account for), the necessary semantics cannot be supplied.

4.2 Internal Representations

As an example of the "internal representation" (IR) of an input, which results from a recursive traversal of a completed parse tree, and which illustrates preparations for compositional analysis, the (artificially complex) input "Which Mexican restaurants in the largest city other than New Providence that are not expensive are open for lunch?" will have [roughly] the internal representation

    (restaurant
      (not (adj expensive))
      (nounmod-ph (food restaurant)
        (nounmod (food (= Mexican)))
        (head ?))
      (prep-ph (restaurant in city)
        (subj ?)
        (arg (city (super large) (!= New-Providence))))
      (adj-ph (restaurant open for meal)
        (subj ?)
        (arg (meal (= lunch)))))

This top-level interpretation of the input instructs the system to find all restaurants that satisfy (a) the negation of the 1-place predicate associated with "expensive", and (b) the three 2-place predicates associated with the noun-noun, prepositional, and adjective phrases. Note that modifiers associated with phrasal modifiers are referenced by their case frame, e.g. "restaurant in city". Within the scope of these references, case labels (e.g. "subj" and "arg") indicate which slots have been instantiated and which slot has been relativized, the latter denoted by "?". The list of slot names associated with each phrase type is stored globally. In most instances, the argument of a case slot can be an arbitrary IR structure, in keeping with the recursive nature of the English inputs being recognized.

Since IR structures are built around the word and phrase types of the English being dealt with, and since the meanings of words and phrases are stored globally, IR structures should not be regarded as a "knowledge representation" in the sense of KL-ONE, logical form, and so forth. Systems similar in goals to TELI but which revolve around logical form include TEAM (Grosz, 1983; Grosz, Appelt, Martin, and Pereira, 1985), IRUS (Bates and Bobrow, 1983; Bates, Moser, and Stallard, 1984), and TQA (Plath, 1976; Damerau, 1985). One system similar to TELI in building intermediate structures that contain references to language-specific concepts is DATALOG (Hafner and Godden, 1985).
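Though TELI stores its semantics in Lisp structures, the effect of evaluating such an IR compositionally can be suggested in a few lines of Prolog over invented facts. The predicate and fact names below (including the toy restaurants and cities) are our own illustration, not TELI's internals.

    % Sketch: the IR above, rendered as a conjunction of 1- and
    % 2-place predicates over a toy restaurant database.
    restaurant(casa_lupita).
    expensive(chez_henri).          % casa_lupita is deliberately not expensive
    serves(casa_lupita, mexican).
    in_city(casa_lupita, summit).
    largest_city(summit).
    open_for(casa_lupita, lunch).

    answer(X) :-
        restaurant(X),
        \+ expensive(X),            % (not (adj expensive))
        serves(X, mexican),         % the noun-noun modifier
        in_city(X, C),              % (prep-ph (restaurant in city) ...)
        largest_city(C),
        C \== new_providence,
        open_for(X, lunch).         % (adj-ph (restaurant open for meal) ...)

The query answer(X) succeeds with X = casa_lupita; each goal corresponds to one modifier in the IR, with the relativized "?" slot played by the shared variable X.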
5. The Initial Phase of Customization

When a user asks TELI to begin learning about a new domain, the system spends from five to thirty minutes, depending on the complexity of the application, obtaining basic information about each table in the database (see Figure 2). Users are first asked to give the key column of the table. This information is used primarily to guide the system in inferring the semantics of certain noun-noun and "of"-based patterns. Next, users are asked which columns contain entity values as opposed to property values. Typical properties are "size", "color", and "length", which differ from entities in that (a) their values do not appear as an argument to arbitrary verbs and prepositions (e.g. other than "have", "with", etc.) and (b) they will not themselves have properties associated with them. Finally, users are asked to specify the type of value each column contains. This information allows subsequent references to concepts (e.g. "color") rather than physical column names. It also aids the system in forming subsequent suggestions to the user (e.g. defaults that can be overridden).

Having obtained the information above, the system constructs definitions that allow simple questions to be answered, such as

    "What is Sally's social security number?"
    "What is the age of John?"

Along with information freely volunteered by the user, these definitions can be subsequently examined or changed at the user's request.

    STUDENT-INFO
    STUDENT   SSN           CLASS   ADVIS
    BILL      123-45-6789     1     BALLARD
    DOUG      111-22-3333     3     LITMAN
    FRED      321-54-9876     3     MARCUS
    JOHN      555-33-1234     2     JONES
    SALLY     314-15-9265     4     BRACHMAN
    SUE       987-65-4321     3     BACHENKO
    TERESA    333-22-4444     G     BORGIDA

[Figure 2: Initial Acquisitions. The screen shows the STUDENT-INFO table above, together with the menus through which the user identifies its key column (STUDENT), classifies each column as an entity or a property, and names the entity types of the STUDENT and ADVIS columns (student and instructor, respectively).]

Based upon the answers to the questions described above, a small number of follow-up questions, mostly unrelated to the subject of this paper, will be asked. For example, the system will propose its best guess as to the morphological variants of nouns, verbs, and other words for the user to confirm or correct.

6. Intermediate Customizations

Having learned about each physical relation, TELI asks for information which, though not needed immediately, is either (a) more simply obtained at the outset, in a context relevant to its semantics, than at a later, arbitrary point, or (b) acquirable collectively, thus preventing several subsequent acquisitions. Unlike the initial acquisitions described in Section 5, intermediate customizations could be excised from the system without any loss in processing ability. We now summarize three forms of intermediate customizations, the last of which may be requested by the user at any time. Allowing users to ask for the other forms as well would be a simple matter.

First, the system will ask which columns contain values that either correspond to or are themselves English modifiers. In Figure 2-a, the values '1' through 'G' in the "class" column might correspond (respectively) to "freshman" through "graduate student", in which case acquisitions might continue as suggested in Figure 3.
From this information, the system constructs a definition for each user-defined modifier; for example the internal definition of "sophomore" will be

    ((sophomore noun student) ((class p-noun) = 2))

A second intermediate acquisition, carried out subject to user confirmation, involves the acceptability of hypothesized syntax and semantics for (a) phrases based on "of", (b) phrases built around "have", "with", and "in", and (c) noun-noun phrases. In deciding what case frames to propose, TELI considers the information it has already acquired about simple functional ("of") relationships.

A third form of intermediate acquisition involves the system's invitation for the user to give lexical and syntactic information for one or more user-defined categories, namely titles, adjectives, common nouns, noun modifiers, prepositions, and verbs. For example, the user might specify six adjectives and the entities they modify, followed by four or five verbs and their associated case frames, and so forth.

7. On-Line Customization

In general, definitions are supplied to TELI whenever (a) an undefined modifier is encountered during the processing of an English input, or (b) the user asks to supply or modify a definition. In each case, the same methods are available for making definitions, and are independent of the modifier type being defined. When creating or modifying a meaning, users are presented with information as shown in Figure 4-a; upon asking to "add a constraint", they are given the menu shown in Figure 4-b. Multiple "constraints" appearing in a semantic specification are presently presumed to be conjoined.

[Figure 3: Intermediate Acquisitions. The screen shows menus asking which columns of STUDENT-INFO contain (encoded) English words, the words associated with each CLASS value (e.g. "freshman" for 1 and "graduate" for G), and whether each such word (FRESHMAN, SOPHOMORE, JUNIOR, SENIOR, GRADUATE) may serve as an adjective, noun modifier, or noun.]

[Figure 4: Top-Level Semantics Menus. Part (a) shows the semantic specification in progress for the adjective "FILE is LARGE" (sample usage: "Sage is LARGE"), with an existing constraint on the LENGTH of Sage and the option to add a constraint; part (b) shows the ways of defining the semantics of the verb phrase "TRAIL LEADS TO MOUNTAIN": by menu selection, by English(like) reference, by database references, or by borrowing from an existing meaning.]

As suggested in Figure 4-a and below, definitions are made in terms of sample values, which the system treats as formal parameters. In this way we avoid the problem of defining a phrase two or more of whose case slots may be filled by the same type of entity (cf. "a student is a classmate of a student if ..."). To assure that any domain value may appear as a constant, the user is able to alter the system's choice of sample names at any time.

7.1 Specification at the Database Level

As noted in Section 3, semantic specifications at the database level are primitive but useful. As shown in Figure 5, a database level specification comprises (a) a relation, possibly arrived at via a user-defined join, and (b) references to columns that correspond to the parameters of the phrase whose semantics is being defined.
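To make the shape of these definitions concrete, two of them can be rendered as Prolog clauses. This is an illustration only (TELI stores such definitions in Lisp structures); the table rows echo Figure 5, and the student data are invented.

    % (a) A database-level definition: "HEIGHT of MOUNTAIN" is given
    % by the NAME and ELEVATION columns of the MOUNTAINS relation.
    mountains(washington, 1917, 6).     % mountains(Name, Elevation, Map)
    mountains(adams, 1768, 6).

    height_of(Mountain, Height) :- mountains(Mountain, Height, _).

    % (b) The modifier definition ((sophomore noun student)
    % ((class p-noun) = 2)) from Section 6, applied to invented data.
    class_of(john, 2).
    class_of(sue, 3).

    sophomore(S) :- class_of(S, 2).

The query sophomore(john) succeeds while sophomore(sue) fails, just as the internal definition prescribes, and height_of(adams, H) retrieves H = 1768 without the user ever naming a physical column again.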
In many cases, the system can utilize its column type information, acquired as described in Section 5, to predict both the relation to be used (or pair of relations for joining) and the appropriate columns to join over, in which case the menu(s) that are presented will contain boldface selections for the user to confirm or alter.

[Figure 5: Database Specification. The screen shows menus for giving the meaning of HEIGHT of MOUNTAIN at the database level: the user selects a relation (MOUNTAINS, with columns NAME, ELEVATION, and MAP; or CAMPSITES, with columns SITE, CAPACITY, and TYPE; two relations may also be joined) and then indicates which column gives the MOUNTAIN (NAME) and which gives the HEIGHT (ELEVATION).]

7.2 Specification by Menu

In our previous experience with LDC, we found that a large variety of meanings could be defined by a predicate in which the result of some function is compared using some relational operator to a specified benchmark value. In TELI, we provide an enhancement to this scheme where definitions (a) may involve more than one argument, (b) may contain more than one function reference, and (c) are acquired in menu form. The current internal representation of a menu specification is a triple of the form suggested by

    <spec>  --> <term> <relop> <term>
    <term>  --> <atom> | <func> ( <atom> )
    <atom>  --> <constant> | <parameter>
    <relop> --> = | < | <= | > | >= | !=

An example of how menu semantics operates is given in Figure 6. When a semantics menu first appears, its "Function" field contains a list of all functions known to apply to at least one of the entities that the definition relates to. This reduces the number of keystrokes required from the user and, more importantly, helps guard against an inadvertent proliferation of concept names.

7.3 English and English-Like Specifications

In addition to the database and menu schemes just described, users may supply definitions in terms of English already known to the system. Some advantages to this are that (1) definitions may be arbitrarily complex, limited only by the coverage of the underlying syntactic component, and (2) users will implicitly be learning to supply semantics at the same time they learn to use the NLP itself. Some disadvantages are (1) a user might want to define something that cannot be paraphrased within the bounds of the grammatical coverage of the system, and (2) unless optimizations are carried out, references to user-defined concepts may entail inefficient processing.

An alternative to English specification, which functions similarly from the user's standpoint, is to provide for "English-like" specifications in which an expression supplied by the user is translated by some pattern-matching algorithm different from, and probably less sophisticated than, the process involved in actual English parsing. The primary advantage of English-like specification, over English specification, is that translations into internal form can be more efficient, since definitions or parts of definitions will be handled on a case by case basis. One probable disadvantage is that the scheme will be less general, in terms of definable concepts, and perhaps "spotty" in terms of what it makes available.
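One way to picture the pattern-matching translation that such English-like specifications might undergo is the small DCG-style sketch below, written in Prolog purely for illustration; TELI's Lisp implementation and its actual patterns are surely richer. The sketch maps a tokenized expression onto the <spec> triple defined in Section 7.2.

    % Sketch: translate "the F of P is greater than N" into a <spec> triple.
    spec(spec(T1, Op, T2)) --> term(T1), relop(Op), term(T2).

    term(func(F, param(P))) --> [the, F, of, P].
    term(const(N))          --> [N], { number(N) }.

    relop(>) --> [is, greater, than].
    relop(<) --> [is, less, than].
    relop(=) --> [is, equal, to].

    % ?- phrase(spec(S), [the, height, of, adams, is, greater, than, 4000]).
    % S = spec(func(height, param(adams)), >, const(4000)).
    % (The threshold 4000 is an arbitrary stand-in for the example.)

Because the patterns are fixed templates rather than a grammar of English, the translation is fast but brittle, which is exactly the trade-off described in the preceding paragraph.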
In TELI, both English and English-like specification are done in terms of sample domain values, which are treated as formal parameters. An example appears in Figure 7. In the current implementation, English-like specifications include (a) any definition definable by menu, and (b) definitions that involve (possibly negated) adjective or noun references. As of this writing, only English specifications that involve no nested parameter references can be processed.

7.4 Specification by Borrowing

In addition to whatever mechanisms an NL system specifically provides for semantic acquisitions, it is reasonable to allow users to define one meaning directly in terms of another (in addition to indirect dependence, as in the case of English specification). In TELI, users may ask to "borrow" from an existing meaning at any time. As shown in Figure 8, the system responds by finding all current items defined in terms of all or some of the parameters (i.e. entities) of the item for which the borrowing is being done. This assures that the entire borrowed meaning can be modified to apply to the item being defined. After being copied, a borrowed meaning may be edited just as though it had been entered from scratch.

[Figure 6: Menu Specification. The screen shows the menu-based definition of the adjective "FILE is LARGE" (sample usage: "Sage is LARGE"): the user selects a function (from CREATION-DATE, LENGTH, and OWNER), an argument (Sage), a relational operator (=, !=, <, <=, >, >=), and a comparison value (300), and then confirms that the definition should be retained.]

[Figure 7: English-like Specification. The screen shows the adjective "MOUNTAIN is TALL" (sample usage: "Adams is TALL") being defined by typing an English(like) reference comparing the height of adams to a threshold.]
1984), from the University of Hamburg; and PHLIQA (Bronnenberg et al, 1978-1979). from Philips Research. We now provide a comparison of TELI's customization strategies with those of the TEAM, IRUS, TQA, and ASK systems (other comparisons would also have been instructive, time and space permitting). Although we have recently spoken with at least one designer of each of these systems (see the Acknowledgements), it is possible that, in addition to intended simplifications, we may have overlooked or misunderstood certain significant, perhaps undocumented, features, in which case we apologize to the reader. Also, we note that our remarks are principally concerned with the goals and the approaches of various projects, and should not be viewed as commenting on the accomplishments or overall quality of TELl or any other system. 8.1 A Comparison with TEAM Both TEAM and TELI represent English- language interfaces that have been applied to several moderately complex relational database domains. Each system provides for a variety of customizations by non-natural language experts, though neither system has claimed success with actual users in either customization or English processing mode. In terms of method, each system obtains (among other things) information about each column of each relation (table) of the database. We proceed to point out some of the more significant differences between the projects, as suggested by Grosz et al (1985) and indicated by Martin (1986). To begin with, TEAM incorporates a more powerful natural language processor than does TELl, with provisions for quantifiers, simple pronouns, elaborate comparative forms, limited forms of conjunction, and numerous smaller features. Its "sort hierarchy" provides a taxonomy more general than that of TELI. It also incorporates disambiguation heuristics which seek to obviate the need for users to provide definitions for some phrase types (e.g. prepositional phrases based on "on", "from", "with", and "in"), and its preparations to deal with time and place references are without counterpart in TELI. On the other hand, the customization features of TELl appear to offer greater sophistication, and sometimes more power, than the respective customization features of TEAM. In terms of sophistication, TELI always offers multiple ways of acquiring information, provides the ability to examine and borrow existing definitions, and is able to invoke the appropriate knowledge acquisition module when missing lexical, syntactic, or semantic information is required. Copncerning definitional power, TELl generally provides for more complex definitions of words and phrases than does TEAM, as described in Sections 5-7. For example, whereas the SRI system typically requires a verb to map into some explicit or virtual relation (e.g. a join of explicit relations), TELl also allows an arbitrary number of properties of objects to be used in definitions (e.g. an old employee is one hired before I980. or an employee admires a manager that works more hours than she does). In TEAM, "acquisition is centered around the relations and fields in the database". In contrast, TELI provides several customization modes, as described in Section 3, and discourages low-level database specifications. 26 In contrast to the principles we espoused for TELI in Section 3, TEAM couples its methods of acquisition with the type of modifier being defined. 
For example, when seeing a "feature field", which contains exactly two distinct values, the system asks for "positive adjectives" and "negative adjectives" associated with these values (e.g. "volcanic" is a positive adjective associated with the database value "Y"). In TEL1, these relationships arise as a special case of the acquisitions shown in Figures 3.6, and 7b. An interesting similarity between TEAM and TELI is that each provides for English(like) definitions. For example. TEAM might be told that "a volcano erupts", from which it infers that a mountain erupts just in case it is a volcano. 8.2 A Comparison with IRUS Another recently developed facilitiy to allow user customizations of a database front-end is represented by the IRACQ component of the IRUS system (Ayuso and Weischedel, 1986). In addition to its practical value, IRACQ is intended as a vehicle that permits experimental work with sophisticated knowledge representation formalisms. IRACQ is similar to TELI in shielding the user from the layout of the underlying data files. Another similarity is that each system accepts case frame specifications in English-like form. but IRACQ allows proper nouns as well as common nouns to be used. Thus. a user might suggest the case frame of the verb "write" by saying "Jones wrote some articles". Since IRUS provides for quite general taxonomic relationships among defined concepts (e.g. nouns), IRACQ proceeds to ascertain which of the possibly several classes that "Jones" belongs to is the most general one that can act as the subject of "write". One important difference between TELI and IRACQ is that IRUS distinguishes conceptual information, which resides within its KR framework, from the linguistic information that characterizes the English to be used. Thus, while IRACQ supports definitions in terms of an arbitrary number of predicates, as does TELl, it assumes that any concepts needed to define a new language item have already been specified. These representations, acquired by a separate module called KREME, involve the KL-ONE notions of "concept" and "relation", which are similar to, but more sophisticated than, the 1- and 2-place predicates that come into existence during a session with TELI. At present, IRACQ allows users to define case frame information for verb phrases, prepositional phrases, and noun phrases involving "of". Its treatment of prepositional phrases is very much like that of TELI in that the head noun being modified is considered part of the the noun-preposition-noun triple for which a definition is beine acquired (cf. Section 4,1). Definitions for individual words (e.g. nouns and adjectives) are not supported but are being considered for future versions of the system, as are facilities that enable the system to inform the user of existing predicates that might be useful in defining a new language item. This facility will be similar in spirit to TELI's provisions for "borrowing" definitions. as described in Section 7.4. 8.3 A Comparison with TQA Unlike most efforts at transportability, TQA has been designed as a working prototype, capable of being customizated for complex database applications by actual users. The primary responsibility of the customization module is to acquire information that relates language concepts, e.g. subject of a given verb, to the columns of the database at hand. Like TELI, TQA avoids having to copy all database values into the lexicon by constructing "shape" information to recognize numbers and similar patterns. 
For example, the system might deduce that all database values referring to a department are of the form "letter followed by two digits", which allows for valuable disambiguations during parsing. Thus, in a database where employees manage projects and supervisors manage departments, the question "Who manages K34?" can be understood to be asking about supervisors without having to find "K34" in either the lexicon or the database. A related problem, which TQA addresses more squarely than most systems (including TELI), concerns the appearance and possible equivalence of database values. For example. "vac lnd" might indicate "vacant land", "grn" and "green" might be used interchangeable, and so forth. Many practical applications require that these sorts of issues be addressed in order for a user to obtain reliable information. Another useful feature concerns the acquisition of information that enables non-trivial output formatting. In simple cases, a database administrator might want nine-digit values appearing in columns associated with social security numbers to be printed with dashes at the appropriate points (e.g. 123456789 becomes 123-45-6789), In more complicated situations, values might actually need to be decoded, so that 0910 becomes "vacant land". This provision for decoding is similar to to the form of intermediate acquisition shown in Figure 3, though here it is being used for opposite effect. 27 8.4 A Comparison with ASK The current ASK prototypes, which run on Sun, Vax, and HP desktop systems, are derived from earlier work on the REL system, which itself derives from work on the DEACON project, which stems from the early 1960's. Unlike most recent efforts, which have sought to incorporate customization features into an existing more-or-less single-domain system, the work with REL, the "Rapidly Extensible Language", fundamentally included definitional capabilities as early as 1969. To begin with, ASK provides quite general customization facilities, allowing English definitions at least as sophisiticated as those outlined in Section 7.3. An example is "ships 'carry' coal to Oslo if there is a shipment whose carrier is ships, type is coal and destination is Oslo". Arithmetic facilities are also provided, e.g. "area equals length times beam". The most distinguishing features of ASK, however, derive from the designers' desire to incorporate natural language technology into an intergrated information management system, rather than provide simple sentence-by-sentence database retrieval. One feature allows ASK to be connected to several external database systems, drawing information from each of them in the context of answering a user's question. A second feature allows a user to provide bulk data input. This begins with the interactive specification of a record type, followed by information used to populate the newly created relation. Acknowledgements The current TELI system derives from work on the LDC project, which was carried out at Duke University by John Lusth and Nancy Tinkham. In converting the NL portions of LDC to operate in our present context, we have engaged in frequent discussions with several persons, including Joan Bachenko, Alan Biermann, Marcia Derr, George Heidorn, Mark Jones. and Mitch Marcus. We also wish to thank Paul Martin of SRI, Damaris Ayuso and Ralph Weischedel of BBN, Fred Damerau of IBM Yorktown Heights, and Fred Thompson of Caltech, for their willingness to answer a number of questions that helped us to formulate the comparisons given in Section 8. 
Finally, we wish to thank Marcia Derr for many useful comments on a draft of our paper. References Ayuso, D. and Weischedel, R. Personal Communication, April 1986. Ballard, B. "A 'Domain Class' ApprOach to Transportable Natural Language Processing", Cognition and Brain Theory 5, 3 (1982), 269-287. Ballard, B. "The Syntax and Semantics of User- Defined Modifiers in a Transportable Natural Language Processor", Proc. Coling-84, Stanford University, July 1984, 52-56. Ballard, B. "User Specification of Syntactic Case Frames in TELI, A Transportable, User-Customized Natural Language Processor", Proc. Coling-86, Bonn, West Germany, August, 1986. Ballard, B., Lusth, J., and Tinkham, N. "LDC-I: A Transportable Natural Language Processor for Office Environments", ACM Transactions on Office Information Systems 2, 1 (1984), 1-23. Ballard, B. and Tinkham, N. "A Phrase-Structured Grammatical Framework for Transportable Natural Language Processing", Computational Linguistics 10, 2 (1984), 81-96. Bates, M. and Bobrow, R. "A Transportable Natural Language Interface for Information Retrieval", Proc. 6th Int. ACM SIGIR Conference, Washington, D.C., June 1983. Bates, M., Moser, M. and Stallard, D. "The IRUS Transportable Natural Language Interface", Proc. First Int. Workshop on Expert Database Systems, Kiawah Island, October 1984, 258-274. Bronnenberg, W., Landsbergen, S., Scha, R., Schoenmakers, W. and van Utteren, E. "PHLIQA-1, a Question-Answering System for Data-Base Consultation in Natural English", Philips tech. Rev. 38 (1978-79), 229-239 and 269-284. Damerau,- F. "Problems and Some Solutions in Customization of Natural Language Database Front Ends", ACM Transactions on Office Information Systems 3, 2 (1985), 165-184. Grosz, B. "TEAM: A Transportable Natural Language Interface System", Conf. on Applied Natural Language Processing, Santa Monica, 1983, 39-45. Grosz, B., Appelt, D., Martin, P. and Pereira, F. "TEAM: An Experiment In The Design Of Transportable Natural-Langauge Interfaces", Artificial Intelligence, in press. Hafner, C. and Godden, C. "Portability of Syntax and Semantics in Datalog". ACM Transactions on Office Information Systems 3.2 (1985), 141-164. 28 Harris, L. "User-Oriented Database Query with the ROBOT Natural Language System", Int. Journal of Man-Machine Studies 9 (1977), 697-713. Johnson, T. Natural Language Computing: The Commercial Applications. Ovum Ltd, London. 1985. Lehmann. H. "Interpretation of natural language in an information system", IBM J. Res. Dev. 22, 5 (1978), pp. 560-571. Martin, P. Personal communication, March 1986. Tennant, H. "Experience With the Evaluation of Natural Language Question Answerers", Int. J. Conf. on Artificial Intelligence, 1979, pp. 275-281. Thompson, F. and Thompson, B. "Practical Natural Language Processing: The REL System as Prototype", In Advances in Computers, Vol. 3, M. Rubinoff and M. Yovits, Eds., Academic Press, 1975. Thompson, B. and Thompson, F. "Introducing ASK: A Simple Knowledgeable System", Conf. on Applied Natural Language Processing, Santa Monica, 1983. 17- 24. Thompson, B. and Thompson. F. "ASK Is Transportable in Half a Dozen Ways", ACM Trans. on Office Information Systems 3, 2 (1985), 185-203. Wahlster, W. "User Models in Dialog Systems", Invited talk at Coling-84, Stanford University, July 1984. 29
COMPUTATIONAL COMPLEXITY OF CURRENT GPSG THEORY

Eric Sven Ristad
MIT Artificial Intelligence Lab, 545 Technology Square, Cambridge, MA 02139
and Thinking Machines Corporation, 245 First Street, Cambridge, MA 02142

ABSTRACT

An important goal of computational linguistics has been to use linguistic theory to guide the construction of computationally efficient real-world natural language processing systems. At first glance, generalized phrase structure grammar (GPSG) appears to be a blessing on two counts. First, the precise formalisms of GPSG might be a direct and transparent guide for parser design and implementation. Second, since GPSG has weak context-free generative power and context-free languages can be parsed in O(n^3) by a wide range of algorithms, GPSG parsers would appear to run in polynomial time. This widely-assumed GPSG "efficient parsability" result is misleading: here we prove that the universal recognition problem for current GPSG theory is exponential-polynomial time hard, and assuredly intractable. The paper pinpoints sources of complexity (e.g. metarules and the theory of syntactic features) in the current GPSG theory and concludes with some linguistically and computationally motivated restrictions on GPSG.

1 Introduction

An important goal of computational linguistics has been to use linguistic theory to guide the construction of computationally efficient real-world natural language processing systems. Generalized Phrase Structure Grammar (GPSG) linguistic theory holds out considerable promise as an aid in this task. The precise formalisms of GPSG offer the prospect of a direct and transparent guide for parser design and implementation. Furthermore, and more importantly, GPSG's weak context-free generative power suggests an efficiency advantage for GPSG-based parsers. Since context-free languages can be parsed in polynomial time, it seems plausible that GPSGs can also be parsed in polynomial time. This would in turn seem to provide "the beginnings of an explanation for the obvious, but largely ignored, fact that humans process the utterances they hear very rapidly (Gazdar, 1981:155)."[1]

[1] See also Joshi, "Tree Adjoining Grammars," p. 226, in Natural Language Parsing (1985), ed. by D. Dowty, L. Karttunen, and A. Zwicky, Cambridge University Press: Cambridge, and "Exceptions to the Rule," Science News 128: 314-315.

In this paper I argue that the expectations of the informal complexity argument from weak context-free generative power are not in fact met. I begin by examining the computational complexity of metarules and the feature system of GPSG and show that these systems can lead to computational intractability. Next I prove that the universal recognition problem for current GPSG theory is Exp-Poly hard, and assuredly intractable.[2] That is, the problem of determining for an arbitrary GPSG G and input string x whether x is in the language L(G) generated by G is exponential polynomial time hard. This result puts GPSG-Recognition in a complexity class occupied by few natural problems: GPSG-Recognition is harder than the traveling salesman problem, context-sensitive language recognition, or winning the game of Chess on an n x n board. The complexity classification shows that the fastest recognition algorithm for GPSGs must take exponential time or worse. One role of a computational analysis is to provide formal insights into linguistic theory.

[2] We use the universal problem to more accurately explore the power of a grammatical formalism (see section 3.1 below for support). Ristad (1985) has previously proven that the universal recognition problem for the GPSGs of Gazdar (1981) is NP-hard and likely to be intractable, even under severe metarule restrictions.
To this end, this paper pinpoints sources of complexity in the current GPSG theory and concludes with some linguistically and computationally motivated restrictions.

2 Complexity of GPSG Components

A generalized phrase structure grammar contains five language-particular components -- immediate dominance (ID) rules, metarules, linear precedence (LP) statements, feature co-occurrence restrictions (FCRs), and feature specification defaults (FSDs) -- and four universal components -- a theory of syntactic features, principles of universal feature instantiation, principles of semantic interpretation, and formal relationships among various components of the grammar.[3]

[3] This work is based on current GPSG theory as presented in Gazdar et al. (1985), hereafter GKPS. The reader is urged to consult that work for a formal presentation and thorough exposition of current GPSG theory.

Syntactic categories are partial functions from features to atomic feature values and syntactic categories. They encode subcategorization, agreement, unbounded dependency, and other significant syntactic information. The set K of syntactic categories is inductively specified by listing the set F of features, the set A of atomic feature values, the function p0 that defines the range of each atomic-valued feature, and a set R of restrictive predicates on categories (FCRs).

The set of ID rules obtained by taking the finite closure of the metarules on the ID rules is mapped into local phrase structure trees, subject to principles of universal feature instantiation, FSDs, FCRs, and LP statements. Finally, local trees are assembled to form phrase structure trees, which are terminated by lexical elements.

To identify sources of complexity in GPSG theory, we consider the isolated complexity of the finite metarule closure operation and the rule to tree mapping, using the finite closure membership and category membership problems, respectively. Informally, the finite closure membership problem is to determine if an ID rule is in the finite closure of a set of metarules M on a set of ID rules R. The category membership problem is to determine if a category C or a legal extension of C is in the set K of all categories based on the function p0 and the sets A, F and R. Note that both problems must be solved by any GPSG-based parsing system when computing the ID rule to local tree mapping.

The major results are that finite closure membership is NP-hard and category membership is PSPACE-hard. Barton (1985) has previously shown that the recognition problem for ID/LP grammars is NP-hard. The components of GPSG theory are computationally complex, as is the theory as a whole.

Assumptions. In the following problem definitions, we allow syntactic categories to be based on arbitrary sets of features and feature values. In actuality, GPSG syntactic categories are based on fixed sets and a fixed function p0. As such, the set K of permissible categories is finite, and a large table containing K could, in principle, be given.[4] We (uncontroversially) generalize to arbitrary sets and an arbitrary function p0 to prevent such a solution while preserving GPSG's theory of syntactic features.[5] No other modifications to the theory are made.

An ambiguity in GKPS is how the FCRs actually apply to embedded categories.[6] Following Ivan Sag (personal communication), I make the natural assumption here that FCRs apply to top-level and to embedded categories equally.

[4] This suggestion is of no practical significance, because the actual number of GPSG syntactic categories is extremely large. The total number of categories, given the 25 atomic features and 4 category-valued features, is

   |K| = 3^25 ((1+3^25)((1+3^25)((1+3^25)(1+3^25))^2)^3)^4 ≈ 3^25 (1+3^25)^64 > 3^1625 > 10^775

See page 10 for details. Many of these categories will be linguistically meaningless, but all GPSGs will generate all of them and then filter some out in consideration of FCRs, FSDs, universal feature instantiation, and the other admissible local trees and lexical entries in the GPSG. While the FCRs in some grammars may reduce the number of categories, FCRs are a language-particular component of the grammar. The vast number of categories cited above is inherent in the GPSG framework.

[5] Our goal is to identify sources of complexity in GPSG theory. The generalization to arbitrary sets allows a fine-grained study of one component of GPSG theory (the theory of syntactic features) with the tools of computational complexity theory. Similarly, the chess board is uncontroversially generalized to size n x n in order to study the computational complexity of chess.

[6] A category C that is defined for a feature f, f ∈ (F - Atom) ∩ DOM(C) (e.g. f = SLASH), contains an embedded category C′, where C(f) = C′. GKPS does not explain whether FCRs must be true of C′ as well as C.
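Because categories and their extensions recur throughout the reductions below, it may help to fix intuitions with a small sketch. The following is illustrative only, not GKPS machinery: categories are modeled as Python dicts whose atomic-valued features map to atoms and whose category-valued features map to nested dicts, and extension is checked recursively, applying to embedded categories as well, per the assumption adopted above.

```python
# Minimal sketch (illustrative only): GPSG syntactic categories as
# partial functions, modeled as Python dicts. Atomic-valued features
# map to atomic values; category-valued features (e.g. SLASH) map to
# embedded category dicts.
def extends(c_ext, c):
    """True iff c_ext is an extension of c: c_ext is defined wherever
    c is, agrees on atomic values, and extends embedded categories."""
    for feature, value in c.items():
        if feature not in c_ext:
            return False
        if isinstance(value, dict):          # category-valued feature
            if not (isinstance(c_ext[feature], dict)
                    and extends(c_ext[feature], value)):
                return False
        elif c_ext[feature] != value:        # atomic-valued feature
            return False
    return True

# e.g. VP[+INV, VFORM FIN] is an extension of VP[VFORM FIN]:
vp_fin = {"N": "-", "V": "+", "BAR": 2, "VFORM": "FIN"}
assert extends(dict(vp_fin, INV="+"), vp_fin)
```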
4 We (uncontroversially) generalize to arbitrary sets and an arbitrary function p to prevent such a solution while preserving GPSG's theory of syntactic features, s No other modifications to the theory are made. An ambiguity in GKPS is how the FCRs actually apply to embedded categories. 6 Following Ivan Sag (personal communi- cation), I make the natural assumption here that FCRs apply top-level and to embedded categories equally. 4This suggestion is of no practical significance, because the actual num- ber of GPSG syntactic categories is extremely large. The total number of categories, given the 25 atomic features and 4 category-valued features, is: J K = K' I = 32s((1 +32s)C(1 +32s)((1 ÷32~)(1 +32s)~)2)s)" ~_ 32s(1 + 32~) s4 > 3 le2~ > 10 TM See page 10 for details. Many of these categories will be linguistically meaningless, but all GPSGs will generate all of them and then filter some out in consideration of FCRs, FSDs, universal feature instantiation, and the other admissible local trees and lexical entries in the GPSG. While the FCRs in some grammars may reduce the number of categories, FCRs are a language-particular component of the grammar. The vast number of categories cited above is inherent in the GPSG framework. SOur goal is to identify sources of complexity in GPSG theory. The gen- eralization to arbitrary sets allows a fine-grained study of one component of GPSG theory (the theory of syntactic features) with the tools of compu- tational complexity theory. Similarly, the chess board is uncontroverslally generalized to size n × a in order to study the computational complexity of chess. eA category C that is defined for a feature ], f E (F - Atom) n DON(C) (e.g. f = SLASH ), contains an embedded category C~, where C(f) --- C~. GKPS does not explain whether FCR's must be true of C~ as well as C. 2.1 Metarules The complete set of ID rules in a GPSG is the maximal set that can be arrived at by taking each metarule and applying it to the set of rules that have not themselves arisen as a result of the application of that metarule. This maximal set is called the finite closure (FC) of a set R of lexical ID rules under a set M of metarules. The cleanest possible complexity proof for metarule finite closure would fix the GPSG (with the exception of metarules) for a given problem, and then construct metarules dependent on the problem instance that is being reduced. Unfortunately, metarules cannot be cleanly removed from the GPSG system. Metarules take ID rules as input, and produce other ID rules as their output. If we were to separate metarules from their inputs and outputs, there would be nothing left to study. The best complexity proof for metarules, then, would fix the GPSG modulo the metarules and their input. We ensure the input is not inadvertently performing some computation by requiring the one ID rule R allowed in the reduction to be fully specified, with only one 0-1evel category on the left-hand side and one unanalyzable terminal symbol on the right-hand side. Furthermore, no FCRs, FSDs, or principles of universal feature instantiation are allowed to apply. These are exceedingly severe constraints. The ID rules generated by this formal system will be the finite closure of the lone ID rule R under the set M of metarules. The (strict, resp.) finite closure membership problem for GPSG metarules is: Given an ID rule r and sets of metarules M and ID rules R, determine if 3r e such that r I ~ r (r I = r, resp.) and r I • FC(M, R). 
Theorem 1: Finite Closure Membership is NP-hard

Proof: On input 3-CNF formula F of length n using the m variables x_1 ... x_m, reduce 3-SAT, a known NP-complete problem, to Metarule-Membership in polynomial time. The set of ID rules consists of the one ID rule R, whose mother category represents the formula variables and clauses, and a set of metarules M s.t. an extension of the ID rule A is in the finite closure of M over R iff F is satisfiable. The metarules generate possible truth assignments for the formula variables, and then compute the truth value of F in the context of those truth assignments. Let w be the string of formula literals in F, and let w_i denote the i-th symbol in the string w.

1. The ID rules R, A

   R: Γ → <satisfiability>
   A: [[STAGE 3]] → <satisfiable>

   where <satisfiability> and <satisfiable> are terminal symbols, and

   Γ = {[y_i 0] : 1 ≤ i ≤ m} ∪ {[c_i 0] : 1 ≤ i ≤ |w|/3} ∪ {[STAGE 1]}

2. Construct the metarules

   (a) m metarules to generate all possible assignments to the variables. ∀i, 1 ≤ i ≤ m:

       {[y_i 0], [STAGE 1]} → W  ⇒  {[y_i 1], [STAGE 1]} → W    (1)

   (b) one metarule to stop the assignment generation process:

       {[STAGE 1]} → W  ⇒  {[STAGE 2]} → W    (2)

   (c) |w| metarules to verify assignments. ∀i,j,k, 1 ≤ i ≤ |w|/3, 1 ≤ j ≤ m, 0 ≤ k ≤ 2, if w_{3i-k} = x_j, then construct the metarule

       {[y_j 1], [c_i 0], [STAGE 2]} → W  ⇒  {[y_j 1], [c_i 1], [STAGE 2]} → W    (3)

       ∀i,j,k, 1 ≤ i ≤ |w|/3, 1 ≤ j ≤ m, 0 ≤ k ≤ 2, if w_{3i-k} is the negated literal of x_j, then construct the metarule

       {[y_j 0], [c_i 0], [STAGE 2]} → W  ⇒  {[y_j 0], [c_i 1], [STAGE 2]} → W    (4)

   (d) Let the category C = {[c_i 1] : 1 ≤ i ≤ |w|/3}. Construct the metarule

       C[STAGE 2] → W  ⇒  {[STAGE 3]} → <satisfiable>    (5)

The reduction constructs O(|w|) metarules of size O(log |w|), and clearly may be performed in polynomial time: the reduction time is essentially the number of symbols needed to write the GPSG down. Note that the strict finite closure membership problem is also NP-hard. One need only add a polynomial number of metarules to "change" the feature values of the mother node C to some canonical value when C(STAGE) = 3 -- all 0, for example, with the exception of STAGE. Let Γ = {[y_i 0] : 1 ≤ i ≤ m} ∪ {[c_i 0] : 1 ≤ i ≤ |w|/3}. Then A would be

   A: Γ[STAGE 3] → <satisfiable>

Q.E.D.

The major source of intractability is the finite closure operation itself. Informally, each metarule can more than double the number of ID rules, hence by chaining metarules (i.e. by applying the output of a metarule to the input of the next metarule) finite closure can increase the number of ID rules exponentially.[7]

[7] More precisely, the metarule finite closure operation can increase the size of a GPSG G worse than exponentially: from |G| to O(|G|^(2^m)). Given a set of ID rules R of symbol size n, and a set M of m metarules, each of size p, the symbol size of FC(M, R) is O(n^(2^m)) = O(|G|^(2^m)). Each metarule can match the productions in R O(n) different ways, inducing O(n + p) new symbols per match: each metarule can therefore square the ID rule grammar size. There are m metarules, so finite closure can create an ID rule grammar with O(n^(2^m)) symbols.

2.2 A Theory of Syntactic Features

Here we show that the complex feature system employed by GPSG leads to computational intractability. The underlying insight for the following complexity proof is the almost direct equivalence between Alternating Turing Machines (ATMs) and syntactic categories in GPSG. The nodes of an ATM computation correspond to 0-level syntactic categories, and the ATM computation tree corresponds to a full, n-level syntactic category. The finite feature closure restriction on categories, which limits the depth of category nesting, will limit the depth of the corresponding ATM computation tree. Finite feature closure constrains us to specifying (at most) a polynomially deep, polynomially branching tree in polynomial time. This is exactly equivalent to a polynomial time ATM computation, and by Chandra and Stockmeyer (1976), also equivalent to a deterministic polynomial space-bounded Turing Machine computation.
As a consequence of the above insight, one would expect the GPSG Category-Membership problem to be PSPACE-hard. The actual proof is considerably simpler when framed as a reduction from the Quantified Boolean Formula (QBF) problem, a known PSPACE-complete problem.

Let a specification of K be the arbitrary sets of features F, atomic features Atom, atomic feature values A, and feature co-occurrence restrictions R, and let p0 be an arbitrary function, all equivalent to those defined in chapter 2 of GKPS. The category membership problem is: Given a category C and a specification of a set K of syntactic categories, determine if there exists a C′ s.t. C′ is an extension of C and C′ ∈ K.

The QBF problem is {Q_1 y_1 Q_2 y_2 ... Q_m y_m F(y_1, y_2, ..., y_m) | Q_i ∈ {∀, ∃}, where the y_i are boolean variables, F is a boolean formula of length n in conjunctive normal form with exactly three variables per clause (3-CNF), and the quantified formula is true}.

Theorem 2: GPSG Category-Membership is PSPACE-hard

Proof: By reduction from QBF. On input formula Ω = Q_1 y_1 Q_2 y_2 ... Q_m y_m F(y_1, y_2, ..., y_m) we construct an instance P of the Category-Membership problem in polynomial time, such that Ω ∈ QBF if and only if P is true.

Consider the QBF as a strictly balanced binary tree, where the i-th quantifier Q_i represents pairs of subtrees <T_t, T_f> such that (1) T_t and T_f each immediately dominate pairs of subtrees representing the quantifiers Q_{i+1} ... Q_m, and (2) the i-th variable y_i is true in T_t and false in T_f. All nodes at level i in the whole tree correspond to the quantifier Q_i. The leaves of the tree are different instantiations of the formula F, corresponding to the quantifier-determined truth assignments to the m variables. A leaf node is labeled true if the instantiated formula F that it represents is true. An internal node in the tree at level i is labeled true if

1. Q_i = "∃" and either daughter is labeled true, or
2. Q_i = "∀" and both daughters are labeled true.

Otherwise, the node is labeled false.

Similarly, categories can be understood as trees, where the features in the domain of a category constitute a node in the tree, and a category C immediately dominates all categories C′ such that ∃f ∈ ((F - Atom) ∩ DOM(C)) [C(f) = C′].

In the QBF reduction, the atomic-valued features are used to represent the m variables, the clauses of F, the quantifier the category represents, and the truth label of the category. The category-valued features represent the quantifiers -- two category-valued features q_k, q′_k represent the subtree pairs <T_t, T_f> for the quantifier Q_k. FCRs maintain quantifier-imposed variable truth assignments "down the tree" and calculate the truth labeling of all leaves, according to F, and internal nodes, according to quantifier meaning.

Details. Let w be the string of formula literals in F, and w_i denote the i-th symbol in the string w. We specify a set K of permissible categories based on A, F, p0, and the set of FCRs R s.t. the category [[LABEL 1]] or an extension of it is an element of K iff Ω is true.

First we define the set of possible 0-level categories, which encode the formula F and truth assignments to the formula variables. The feature w_i represents the formula literal w_i in w, y_j represents the variable y_j in Ω, and c_i represents the truth value of the i-th clause in F.

   Atom = {LEVEL, LABEL} ∪ {w_i : 1 ≤ i ≤ |w|} ∪ {y_j : 1 ≤ j ≤ m} ∪ {c_i : 1 ≤ i ≤ |w|/3}
   F - Atom = {q_k, q′_k : 1 ≤ k ≤ m}
   p0(LEVEL) = {k : 1 ≤ k ≤ m+1}
   p0(f) = {0, 1}  ∀f ∈ Atom - {LEVEL}

FCRs are included to constrain both the form and content of the guesses:

1. FCRs to create strictly balanced binary trees: ∀k, 1 ≤ k ≤ m,

   [LEVEL k] ≡ [q_k [[y_k 1][LEVEL k+1]]] & [q′_k [[y_k 0][LEVEL k+1]]]

2. FCRs to ensure all 0-level categories are fully specified: ∀i, 1 ≤ i ≤ |w|/3,

   [c_i] ≡ [w_{3i-2}] & [w_{3i-1}] & [w_{3i}]
   [LABEL] ≡ [c_i]
   ∀k, 1 ≤ k ≤ m, [LABEL] ≡ [y_k]

3. FCRs to label internal nodes with truth values determined by quantifier meaning: ∀k, 1 ≤ k ≤ m, if Q_k = "∀", then include:

   [LEVEL k] & [LABEL 1] ≡ [q_k [[LABEL 1]]] & [q′_k [[LABEL 1]]]
   [LEVEL k] & [LABEL 0] ≡ [q_k [[LABEL 0]]] ∨ [q′_k [[LABEL 0]]]

   otherwise Q_k = "∃", and include:

   [LEVEL k] & [LABEL 1] ≡ [q_k [[LABEL 1]]] ∨ [q′_k [[LABEL 1]]]
   [LEVEL k] & [LABEL 0] ≡ [q_k [[LABEL 0]]] & [q′_k [[LABEL 0]]]

   The category-valued features q_k and q′_k represent the quantifier Q_k. In the category value of q_k, the formula variable y_k = 1 everywhere, while in the category value of q′_k, y_k = 0 everywhere.

4. one FCR to guarantee that only satisfiable assignments are permitted:

   [LEVEL 1] ⊃ [LABEL 1]

5. FCRs to ensure that quantifier assignments are preserved "down the tree": ∀i,k, 1 ≤ i ≤ k ≤ m,

   [y_i 1] ⊃ [q_k [[y_i 1]]] & [q′_k [[y_i 1]]]
   [y_i 0] ⊃ [q_k [[y_i 0]]] & [q′_k [[y_i 0]]]

6. FCRs to instantiate variable assignments into the formula F: ∀i,k, 1 ≤ i ≤ |w| and 1 ≤ k ≤ m, if w_i = y_k, then include:

   [y_k 1] ⊃ [w_i 1]
   [y_k 0] ⊃ [w_i 0]

   else if w_i is the negated literal of y_k, then include:

   [y_k 1] ⊃ [w_i 0]
   [y_k 0] ⊃ [w_i 1]

7. FCRs to verify the guessed variable assignments in leaf nodes: ∀i, 1 ≤ i ≤ |w|/3,

   [c_i 0] ≡ [w_{3i-2} 0] & [w_{3i-1} 0] & [w_{3i} 0]
   [c_i 1] ≡ [w_{3i-2} 1] ∨ [w_{3i-1} 1] ∨ [w_{3i} 1]
   [LEVEL m+1] & [c_i 0] ⊃ [LABEL 0]
   [LEVEL m+1] & [c_1 1] & [c_2 1] & ... & [c_{|w|/3} 1] ⊃ [LABEL 1]

The reduction constructs O(|w|) features and O(m^2) FCRs of size O(log m) in a simple manner, and consequently may be seen to be polynomial time. Q.E.D.

The primary source of intractability in the theory of syntactic features is the large number of possible syntactic categories (arising from finite feature closure) in combination with the computational power of feature co-occurrence restrictions.[8] FCRs of the "disjunctive consequence" form [f v] ⊃ [f_1 v_1] ∨ ... ∨ [f_n v_n] compute the direct analogue of Satisfiability: when used in conjunction with other FCRs, the GPSG effectively must try all n feature-value combinations.

[8] Finite feature closure admits a surprisingly large number of possible categories. Given a specification (F, Atom, A, R, p0) of K, let a = |Atom| and b = |F - Atom|. Assume that all atomic features are binary: a feature may be +, -, or undefined, and there are 3^a 0-level categories. The b category-valued features may each assume O(3^a) possible values in a 1-level category, so |K^1| = O(3^a (3^a)^b). More generally,

   |K| = O(3^(a Σ_{i=0}^{b} b!/i!)) = O(3^(a b! Σ_{i=0}^{b} 1/i!)) = O(3^(e a b!)) = O(3^(a b!))

where Σ_{i=0}^{b} 1/i! converges to e ≈ 2.7 very rapidly, and a, b = O(|G|); a = 25, b = 4 in GKPS. The smallest category in K will be 1 symbol (null set), and the largest, maximally-specified, category will be of symbol-size log |K| = O(a · b!).

3 Complexity of GPSG-Recognition

Two isolated membership problems for GPSG's component formal devices were considered above in an attempt to isolate sources of complexity in GPSG theory. In this section the recognition problem (RP) for GPSG theory as a whole is considered. I begin by arguing that the linguistically and computationally relevant recognition problem is the universal recognition problem, as opposed to the fixed language recognition problem. I then show that the former problem is exponential-polynomial (Exp-Poly) time-hard.
3.1 Defining the Recognition Problem

The universal recognition problem is: given a grammar G and input string x, is x ∈ L(G)? Alternately, the recognition problem for a class of grammars may be defined as the family of questions in one unknown. This fixed language recognition problem is: given an input string x, is x ∈ L for some fixed language L? For the fixed language RP, it does not matter which grammar is chosen to generate L -- typically, the fastest grammar is picked.

It seems reasonably clear that the universal RP is of greater linguistic and engineering interest than the fixed language RP. The grammars licensed by linguistic theory assign structural descriptions to utterances, which are used to query and update databases, be interpreted semantically, translated into other human languages, and so on. The universal recognition problem -- unlike the fixed language problem -- determines membership with respect to a grammar, and therefore more accurately models the parsing problem, which must use a grammar to assign structural descriptions.

The universal RP also bears most directly on issues of natural language acquisition. The language learner evidently possesses a mechanism for selecting grammars from the class of learnable natural language grammars L_C on the basis of linguistic inputs. The more fundamental question for linguistic theory, then, is "what is the recognition complexity of the class L_C?". If this problem should prove computationally intractable, then the (potential) tractability of the problem for each language generated by a G in the class is only a partial answer to the linguistic questions raised.

Finally, complexity considerations favor the universal RP. The goal of a complexity analysis is to characterize the amount of computational resources (e.g. time, space) needed to solve the problem in terms of all computationally relevant inputs on some standard machine model (typically, a multi-tape deterministic Turing machine). We know that both input string length and grammar size and structure affect the complexity of the recognition problem. Hence, excluding either input from complexity consideration would not advance our understanding.[9]

[9] This "consider all relevant inputs" methodology is universally assumed in the formal language and computational complexity literature. For example, Hopcroft and Ullman (1979:139) define the context-free grammar recognition problem as: "Given a CFG G = (V, T, P, S) and a string x in T*, is x in L(G)?". Garey and Johnson (1979) is a standard reference work in the field of computational complexity. All 10 automata and language recognition problems covered in the book (pp. 265-271) are universal, i.e. of the form "Given an instance of a machine/grammar and an input, does the machine/grammar accept the input?" The complexity of these recognition problems is always calculated in terms of grammar and input size.

Linguistics and computer science are primarily interested in the universal recognition problem because both disciplines are concerned with the formal power of a family of grammars. Linguistic competence and performance must be considered in the larger context of efficient language acquisition, while computational considerations demand that the recognition problem be characterized in terms of both input string and grammar size. Excluding grammar size from complexity consideration in order to argue that the recognition problem for a family of grammars is tractable is akin to fixing the size of the chess board in order to argue that winning the game of chess is tractable: neither claim advances our scientific understanding of chess or natural language.

3.2 GPSG-Recognition is Exp-Poly hard

Theorem 3: GPSG-Recognition is Exp-Poly time-hard

Proof: By direct simulation of a polynomial space bounded alternating Turing Machine M on input w. Let S(n) be a polynomial in n. Then, on input M, an S(n) space-bounded one tape alternating Turing Machine (ATM), and string w, we construct a GPSG G in polynomial time such that w ∈ L(M) iff $_0 w_1 1 w_2 2 ... w_n n $_{n+1} ∈ L(G). By Chandra and Stockmeyer (1976),

   ASPACE(S(n)) = ∪_{c>0} DTIME(c^S(n))

where ASPACE(S(n)) is the class of problems solvable in space S(n) on an ATM, and DTIME(F(n)) is the class of problems solvable in time F(n) on a deterministic Turing Machine. As a consequence of this result and our following proof, we have the immediate result that GPSG-Recognition is DTIME(c^S(n))-hard, for all constants c, or Exp-Poly time-hard.

An alternating Turing Machine is like a nondeterministic TM, except that some subset of its states will be referred to as universal states, and the remainder as existential states. A nondeterministic TM is an alternating TM with no universal states.[10]

[10] Our ATM definition is taken from Chandra and Stockmeyer (1976), with the restriction that the work tapes are one-way infinite, instead of two-way infinite. Without loss of generality, we use a 1-tape ATM, so δ ⊆ (Q × Σ × Γ) × (Q × Γ × {L,R} × {L,R}).

The nodes of the ATM computation tree are represented by syntactic categories in K^0 -- one feature for every tape square, plus three features to encode the ATM tape head positions and the current state. The reduction is limited to specifying a polynomial number of features in polynomial time; since these features are used to encode the ATM tape, the reduction may only specify polynomial space bounded ATM computations.

The ID rules encode the ATM Next_M() relation, i.e. C → Next_M(C) for a universal configuration C. The reduction constructs an ID rule for every combination of possible head position, machine state, and symbol on the scanned tape square. Principles of universal feature instantiation transfer the rest of the instantaneous description (i.e. contents of the tape) from mother to daughters in ID rules.

Let Next_M(C) = {C_0, C_1, ..., C_k}. If C is a universal configuration, then we construct an ID rule of the form

   C → C_0, C_1, ..., C_k    (6)

Otherwise, C is an existential configuration and we construct the k + 1 ID rules

   C → C_i    ∀i, 0 ≤ i ≤ k    (7)

A universal ATM configuration is labeled accepting if and only if it has halted and accepted, or if all of its daughters are labeled accepting. We reproduce this with the ID rules in 6 (or 8), which will be admissible only if all subtrees rooted by the RHS nodes are also admissible.
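The construction of ID rules (6) and (7) from the ATM's next-move relation can be pictured concretely. The sketch below is illustrative only: configurations stand in for the categories that encode them, and next_moves and is_universal are hypothetical helpers, not part of the formal apparatus; the surrounding paragraphs explain the AND/OR admissibility behavior the two rule forms produce.

```python
# Illustrative sketch: deriving ID rules (6)/(7) from an ATM's
# next-move relation. Configurations stand in for the K^0 categories
# that encode them; next_moves(c) and is_universal(c) are assumed
# helpers describing the simulated machine M.
def atm_to_id_rules(configs, next_moves, is_universal):
    id_rules = []
    for c in configs:
        succs = list(next_moves(c))
        if not succs:
            continue                      # halted configurations get rule (10)
        if is_universal(c):
            # rule (6): C -> C0, C1, ..., Ck -- admissible only if
            # every daughter subtree is admissible (AND branching)
            id_rules.append((c, tuple(succs)))
        else:
            # rules (7): C -> Ci for each successor -- admissible if
            # some daughter subtree is admissible (OR branching)
            id_rules.extend((c, (s,)) for s in succs)
    return id_rules
```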
An existential ATM configuration is labeled accepting if and only if it has halted and accepted, or if one of its daughters is labeled accepting. We reproduce this with the ID rules in 7 (or 9), which will be admissible only if one subtree rooted by a RHS node is admissible.

All features that represent tape squares are declared to be in the HEAD feature set, and all daughter categories in the constructed ID rules are head daughters, thus ensuring that the Head Feature Convention (HFC) will transfer the tape contents of the mother to the daughter(s), modulo the tape writing activity specified by the next move relation.

Details. Let

   Result0_M(i, a, d) = [[HEAD0 i+1], [i a], [A 1]]  if d = R
                        [[HEAD0 i-1], [i a], [A 1]]  if d = L

   Result1_M(j, c, p, d) = [[HEAD1 j+1], [r_j c], [STATE p]]  if d = R
                           [[HEAD1 j-1], [r_j c], [STATE p]]  if d = L

   Trans_M(q, a, b) = {<p, c, d1, d2> : <(q, a, b), (p, c, d1, d2)> ∈ δ}

where a is the read-only (R/O) tape symbol currently being scanned, b is the read-write (R/W) tape symbol currently being scanned, d1 is the R/O tape direction, and d2 is the R/W tape direction.

The GPSG G contains:

1. Feature definitions

   A category in K^0 represents a node of an ATM computation tree, where the features in Atom encode the ATM configuration. Labeling is performed by ID rules.

   (a) definition of F, Atom, A

       F = Atom = {STATE, HEAD0, HEAD1, A} ∪ {i : 0 ≤ i ≤ |w|+1} ∪ {r_j : 1 ≤ j ≤ S(|w|)}
       A = Q ∪ Σ ∪ Γ ; as defined earlier

   (b) definition of p0

       p0(A) = {1, 2, 3}
       p0(STATE) = Q ; the ATM state set
       p0(HEAD0) = {i : 1 ≤ i ≤ |w|}
       p0(HEAD1) = {j : 1 ≤ j ≤ S(|w|)}
       ∀f ∈ {i : 0 ≤ i ≤ |w|+1}, p0(f) = Σ ∪ {$} ; the ATM input alphabet
       ∀f ∈ {r_j : 1 ≤ j ≤ S(|w|)}, p0(f) = Γ ; the ATM tape alphabet

   (c) definition of the HEAD feature set

       HEAD = {i : 0 ≤ i ≤ |w|+1} ∪ {r_j : 1 ≤ j ≤ S(|w|)}

   (d) FCRs to ensure full specification of all categories except null ones:

       ∀f ∈ Atom, [STATE] ⊃ [f]

2. Grammatical rules

   ∀i, j, q, a, b : 1 ≤ i ≤ |w|, 1 ≤ j ≤ S(|w|), q ∈ Q, a ∈ Σ, b ∈ Γ, if Trans_M(q, a, b) ≠ ∅, construct the following ID rules.

   (a) if q ∈ U (universal state)

       {[HEAD0 i], [i a], [HEAD1 j], [r_j b], [STATE q], [A 1]} →
         {Result0_M(i, a, d1_k) ∪ Result1_M(j, c_k, p_k, d2_k) : <p_k, c_k, d1_k, d2_k> ∈ Trans_M(q, a, b)}    (8)

       where all categories on the RHS are heads.

   (b) otherwise q ∈ Q - U (existential state). ∀<p_k, c_k, d1_k, d2_k> ∈ Trans_M(q, a, b),

       {[HEAD0 i], [i a], [HEAD1 j], [r_j b], [STATE q], [A 1]} →
         Result0_M(i, a, d1_k) ∪ Result1_M(j, c_k, p_k, d2_k)    (9)

       where all categories on the RHS are heads.

   (c) One ID rule to terminate accepting states, using null-transitions:

       {[STATE h], [r_1 Y]} → ε    (10)

   (d) Two ID rules to read input strings and begin the ATM simulation. The A feature is used to separate functionally distinct components of the grammar. [A 1] categories participate in the direct ATM simulation, [A 2] categories are involved in reading the input string, and the [A 3] category connects the read input string with the ATM simulation start state.

       START → {[A 1]}, {[A 2]}    (11)
       {[A 2]} → {[A 2]}, {[A 2]}

       where all daughters are head daughters, and where

       START = {[HEAD0 1], [HEAD1 1], [STATE s], [A 3]} ∪ {[r_j #] : 1 ≤ j ≤ S(|w|)}

   (e) the lexical rules. ∀a, i, a ∈ Σ, 1 ≤ i ≤ |w|,

       <a_i, {[A 2], [i a]}>    (12)

       ∀i, 0 ≤ i ≤ |w|+1,

       <$_i, {[A 2], [i $]}>

The reduction plainly may be performed in polynomial time in the size of the simulated ATM, by inspection.

No metarules or LP statements are needed, although metarules could have been used instead of the Head Feature Convention. Both devices are capable of transferring the contents of the ATM tape from the mother to the daughter(s). One metarule would be needed for each tape square/tape symbol combination in the ATM.

GKPS Definition 5.14 of Admissibility guarantees that admissible trees must be terminated.[11] By the construction above -- see especially the ID rule 10 -- an [A 1] node can be terminated only if it is an accepting configuration (i.e. it has halted and printed Y on its first square). This means the only admissible trees are accepting ones whose yield is the input string followed by a very long empty string. Q.E.D.

[11] The admissibility of nonlocal trees is defined as follows (GKPS, p. 104): Definition: Admissibility. Let R be a set of ID rules. Then a tree t is admissible from R if and only if 1. t is terminated, and 2. every local subtree in t is either terminated or locally admissible from some r ∈ R.

3.3 Sources of Intractability

The two sources of intractability in GPSG theory spotlighted by this reduction are null-transitions in ID rules (see the ID rule 10 above), and universal feature instantiation (in this case, the Head Feature Convention).

Grammars with unrestricted null-transitions can assign elaborate phrase structure to the empty string, which is linguistically undesirable and computationally costly. The reduction must construct a GPSG G and input string x in polynomial time such that x ∈ L(G) iff w ∈ L(M), where M is a PSPACE-bounded ATM with input w. The 'polynomial time' constraint prevents us from making either x or G too big. Null-transitions allow the grammar to simulate the PSPACE ATM computation (and an Exp-Poly TM computation indirectly) with an enormously long derivation string and then erase the string. If the GPSG G were unable to erase the derivation string, G would only accept strings which were exponentially larger than M and w, i.e. too big to write down in polynomial time.

The Head Feature Convention transfers HEAD feature values from the mother to the head daughters just in case they don't conflict. In the reduction we use HEAD features to encode the ATM tape, and thereby use the HFC to transfer the tape contents from one ATM configuration C (represented by the mother) to its immediate successors C_0, ..., C_n (the head daughters). The configurations C, C_0, ..., C_n have identical tapes, with the critical exception of one tape square. If the HFC enforced absolute agreement between the HEAD features of the mother and head daughters, we would be unable to simulate the PSPACE ATM computation in this manner.

4 Interpreting the Result

4.1 Generative Power and Computational Complexity

At first glance, a proof that GPSG-Recognition is Exp-Poly hard appears to contradict the fact that context-free languages can be recognized in O(n^3) time by a wide range of algorithms. To see why there is no contradiction, we must first explicitly state the argument from weak context-free generative power, which we dub the efficient parsability (EP) argument. The EP argument states that any GPSG can be converted into a weakly equivalent context-free grammar (CFG), and that CFG-Recognition is polynomial time; therefore, GPSG-Recognition must also be polynomial time. The EP argument continues: if the conversion is fast, then GPSG-Recognition is fast, but even if the conversion is slow, recognition using the "compiled" CFG will still be fast, and we may justifiably lose interest in recognition using the original, slow, GPSG.
The EP argument is misleading because it ignores both the effect conversion has on grammar size, and the effect grammar size has on recognition speed. Crucially, grammar size affects recognition time in all known algorithms, and the only grammars directly usable by context-free parsers, i.e. with the same complexity as a CFG, are those composed of context-free productions with atomic nonterminal symbols. For GPSG, this is the set of admissible local trees, and this set is astronomical:

   O(3^(m! · m^(2^m + 1)))    (13)

in a GPSG G of size m.[12] Context-free parsers like the Earley algorithm run in time O(|G′|^2 · n^3), where |G′| is the size of the CFG G′ and n the input string length, so a GPSG G of size m will be recognized in time

   O(3^(2 · m! · m^(2^m + 1)) · n^3)    (14)

The hyper-exponential term will dominate the Earley algorithm complexity in the reduction above because m is a function of the size of the ATM we are simulating. Even if the GPSG is held constant, the stunning derived grammar size in formula 13 turns up as an equally stunning 'constant' multiplicative factor in 14, which in turn will dominate the real-world performance of the Earley algorithm for all expected inputs (i.e. any that can be written down in the universe), every time we use the derived grammar.[13]

[12] As we saw above, the metarule finite closure operation can increase the ID rule grammar size from |R| = O(|G|) to O(m^(2^m)) in a GPSG G of size m. We ignore the effects of ID/LP format on the number of admissible local trees here, and note that if we expanded out all admissible linear precedence possibilities in FC(M,R), the resultant 'ordered' ID rule grammar would be larger still. In the worst case, every symbol in FC(M,R) is underspecified, and every category in K extends every symbol in the FC(M,R) grammar. Since there are O(3^(m·m!)) possible syntactic categories, and O(m^(2^m)) symbols in FC(M,R), the number of admissible local trees (= atomic context-free productions) in G is

   O((3^(m·m!))^(m^(2^m))) = O(3^(m! · m^(2^m + 1)))

i.e. astronomical. Ristad (1986) argues that the minimal set of admissible local trees in GKPS' GPSG for English is considerably smaller, yet still contains more than 10^30 local trees.

[13] The compiled grammar recognition problem is at least as intractable as the uncompiled one. Even worse, Barton (1985) shows how the grammar expansion increases both the space and time costs of recognition, when compared to the cost of using the grammar directly.

Pullum (1985) has suggested that "examination of a suitable 'typical' GPSG description reveals a ratio of only 4 to 1 between expanded and unexpanded grammar statements," strongly implying that GPSG is efficiently processable as a consequence.[14] But this "expanded grammar" is not adequately expanded, i.e. it is not composed of context-free productions with unanalyzable nonterminal symbols.[15] These informal tractability arguments are a particular instance of the more general EP argument and are equally misleading.

[14] This substantive argument is somewhat strange coming from a co-author of a book which advocates the purely formal investigation of linguistics: "The universalism [of natural language] is, ultimately, intended to be entirely embodied in the formal system, not expressed by statements made in it." GKPS (4). It is difficult to respond precisely to the claims made in Pullum (1985), since the abstract is (necessarily) brief and consists of assertions unsupported by factual documentation or clarifying assumptions.

[15] "Expanded grammar" appears to refer to the output of metarule finite closure (i.e. ID rules), and this expanded grammar is tractable only if the grammar is directly usable by the Earley algorithm exactly as context-free productions are: all nonterminals in the context-free productions must be unanalyzable. But the categories and ID rules of the metarule finite closure grammar do not have this property. Nonterminals in GPSG are decomposable into a complex set of feature specifications and cannot be made atomic, in part because not all extensions of ID rule categories are legal. For example, the categories VP[+INV, VFORM PAS] and VP[+INV, VFORM FIN] are not legal extensions of VP in English, while VP[+INV, +AUX, VFORM FIN] is. FCRs, FSDs, LP statements, and principles of universal feature instantiation -- all of which contribute to GPSG's intractability -- must all still apply to the rules of this expanded grammar. Even if we ignore the significant computational complexity introduced by the machinery mentioned in the previous paragraph (i.e. theory of syntactic features, FCRs, FSDs, ID/LP format, null-transitions, and metarules), GPSG will still not obtain an efficient parsability result. This is because the Head Feature Convention alone ensures that the universal recognition problem for GPSGs will be NP-hard and likely to be intractable. Ristad (1986) contains a proof. This result should not be surprising, given that (1) principles of universal feature instantiation in current GPSG theory replace the metarules of earlier versions of GPSG theory, and (2) metarules are known to cause intractability in GPSG.
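To get a feel for the magnitudes involved, the following rough sketch (an illustration taking formula 13 at face value, not a computation from the paper) evaluates the number of decimal digits in the derived grammar size for tiny values of m:

```python
# Illustrative arithmetic only: how the derived-CFG size of formula
# (13), read here as 3**(m! * m**(2**m + 1)), explodes for tiny m.
from math import factorial, log10

def digits_in_derived_grammar_size(m):
    # log10 of 3**(m! * m**(2**m + 1)) = number of decimal digits
    return factorial(m) * m ** (2 ** m + 1) * log10(3)

for m in (2, 3, 4):
    print(m, digits_in_derived_grammar_size(m))
# m=2: ~31 digits; m=3: ~5.6e4 digits; m=4: ~2.0e11 digits --
# the 'constant' factor in (14) is unwritable long before m = 5
```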
The preceding discussion of how intractability arises when converting a GPSG into a weakly equivalent CFG does not in principle preclude the existence of an efficient compilation step. If the compiled grammar is truly fast and assigns the same structural descriptions as the uncompiled GPSG, and it is possible to compile the GPSG in practice, then the complexity of the universal recognition problem would not accurately reflect the real cost of parsing.[16] But until such a suggestion is forthcoming, we must assume that it does not exist.[17][18]

[16] The existence or nonexistence of efficient compilation functions does not affect either our scientific interest in the universal grammar recognition problem or the power and relevance of a complexity analysis. If complexity theory classifies a problem as intractable, we learn that something more must be said to obtain tractability, and that any efficient compilation step, if it exists at all, must itself be costly.

[17] Note that the GPSG we constructed in the preceding reduction will actually accept any input x of length less than or equal to |w| if and only if the ATM M accepts it using S(|w|) space. We prepare an input string x for the GPSG by converting it to the string $_0 x_1 1 x_2 2 ... x_n n $_{n+1}; e.g. "shades" is accepted by the ATM if and only if the string $_0 s1 h2 a3 d4 e5 s6 $_7 is accepted by the GPSG. Trivial changes in the grammar allow us to permute and "spread" the characters of x across an infinite class of strings in an unbounded number of ways, e.g. $_0 γ_0 x_1 1 γ_1 x_2 2 γ_2 ... x_n n γ_n $_{n+1}, where each γ_i is a string over an alphabet which is distinct from the x_i alphabet. Although the flexibility of this construction results in a more complicated GPSG, it argues powerfully against the existence of any efficient compilation procedure for GPSGs. Any efficient compilation procedure must perform more than an exponential polynomial amount of work (GPSG-Recognition takes at least Exp-Poly time) on at least an exponential number of inputs (all inputs that fit in the |w| space of the ATM's read-only tape). More importantly, the required compilation procedure will convert any exponential-polynomial time bounded Turing Machine into a polynomial-time TM for the class of inputs whose membership can be determined within an arbitrary (fixed) exp-poly time bound. Simply listing the accepted inputs will not work because both the GPSG and TM may accept an infinite class of inputs. Such a compilation procedure would be extremely powerful.

[18] Note that compilation illegitimately assumes that the compilation step is free. There is one theory of primitive language learning and use: conjecture a grammar and use it. For this procedure to work, grammars should be easy to test on small inputs. The overall complexity of learning, testing, and speech must be considered. Compilation speeds up the speech component at the expense of greater complexity in the other two components. For this linguistic reason the compilation argument is suspect.

4.2 Complexity and Succinctness

The major complexity result of this paper proves that the fastest algorithm for GPSG-Recognition must take more than exponential time. The immediately preceding section demonstrates exactly how a particular algorithm for GPSG-Recognition (the EP argument) comes to grief: weak context-free generative power does not ensure efficient parsability because a GPSG G is weakly equivalent to a very large CFG G′, and CFG size affects recognition time. The rebuttal does not suggest that computational complexity arises from representational succinctness, either here or in general. Complexity results characterize the amount of resources needed to solve instances of a problem, while succinctness results measure the space reduction gained by one representation over another, equivalent, representation.

There is no causal connection between computational complexity and representational succinctness, either in practice or principle. In practice, converting one grammar into a more succinct one can either increase or decrease the recognition cost. For example, converting an instance of context-free recognition (known to be polynomial time) into an instance of context-sensitive recognition (known to be PSPACE-complete and likely to be intractable) can significantly speed the recognition problem if the conversion decreases the size of the CFG logarithmically or better. Even more strangely, increasing ambiguity in a CFG can speed recognition time if the succinctness gain is large enough, or slow it down otherwise -- unambiguous CFGs can be recognized in linear time, while ambiguous ones require cubic time.

In principle, tractable problems may involve succinct representations. For example, the iterating coordination schema (ICS) of GPSG is an unbeatably succinct encoding of an infinite set of context-free rules; from a computational complexity viewpoint, the ICS is utterly trivial using a slightly modified Earley algorithm.[19] Tractable problems may also be verbosely represented: consider a random finite language, which may be recognized in essentially constant time on a typical computer (using a hash table), yet whose elements must be individually listed. Similarly, intractable problems may be represented both succinctly and nonsuccinctly. As is well known, the Turing machine for any arbitrary r.e. set may be either extremely small or monstrously big. Winning the game of chess when played on an n x n board is likely to be computationally intractable, yet the chess board is not intended to be an encoding of another representation, succinct or otherwise.

[19] A more extreme example of the unrelatedness of succinctness and complexity is the absolute succinctness with which the dense language Σ* may be represented -- whether by a regular expression, CFG, or even Turing machine -- yet members of Σ* may be recognized in constant time (i.e. always accept).

Tractable problems may involve succinct or nonsuccinct representations, as may intractable problems. The reductions in this paper show that GPSGs are not merely succinct encodings of some context-free grammars; they are inherently complex grammars for some context-free languages. The heart of the matter is that GPSG's formal devices are computationally complex and can encode provably intractable problems.

4.3 Relevance of the Result

In this paper, we argued that there is nothing in the GPSG formal framework that guarantees computational tractability: proponents of GPSG must look elsewhere for an explanation of efficient parsability, if one is to be given at all. The crux of the matter is that the complex components of GPSG theory interact in intractable ways, and that weak context-free generative power does not guarantee tractability when grammar size is taken into account. A faithful implementation of the GPSG formalisms of GKPS will provably be intractable; expectations computational linguistics might have held in this regard are not fulfilled by current GPSG theory.

This formal property of GPSGs is straightforwardly interesting to GPSG linguists. As outlined by GKPS, "an important goal of the GPSG approach to linguistics [is] the construction of theories of the structure of sentences under which significant properties of grammars and languages fall out as theorems as opposed to being stipulated as axioms (p.4)."

The role of a computational analysis of the sort provided here is fundamentally positive: it can offer significant formal insights into linguistic theory and human language, and suggest improvements in linguistic theory and real-world parsers. The insights gained may be used to revise the linguistic theory so that it is both stronger linguistically and weaker formally. Work on revising GPSG is in progress. Briefly, some proposed changes suggested by the preceding reductions are: unit feature closure, no FCRs or FSDs, no null-transitions in ID rules, metarule unit closure, and no problematic feature specifications in the principles of universal feature instantiation. Not only do these restrictions alleviate most of GPSG's computational intractability, but they increase the theory's linguistic constraint and reduce the number of nonnatural language grammars licensed by the theory. Unfortunately, there is insufficient space to discuss these proposed revisions here -- the reader is referred to Ristad (1986) for a complete discussion.

Acknowledgments. Robert Berwick, Jim Higginbotham, and Richard Larson greatly assisted the author in writing this paper. The author is also indebted to Sandiway Fong and David Waltz for their help, and to the MIT Artificial Intelligence Lab and Thinking Machines Corporation for supporting this research.

5 References

Barton, G.E. (1985). "On the Complexity of ID/LP Parsing," Computational Linguistics 11(4): 205-218.

Chandra, A. and L. Stockmeyer (1976). "Alternation," 17th Annual Symposium on Foundations of Computer Science: 98-108.

Gazdar, G. (1981). "Unbounded Dependencies and Coordinate Structure," Linguistic Inquiry 12: 155-184.

Gazdar, G., E. Klein, G. Pullum, and I. Sag (1985). Generalized Phrase Structure Grammar. Oxford, England: Basil Blackwell.

Garey, M. and D. Johnson (1979). Computers and Intractability. San Francisco: W.H. Freeman and Co.

Hopcroft, J.E., and J.D. Ullman (1979). Introduction to Automata Theory, Languages, and Computation. Reading, MA: Addison-Wesley.

Pullum, G.K. (1985). "The Computational Tractability of GPSG," Abstracts of the 60th Annual Meeting of the Linguistics Society of America, Seattle, WA: 36.

Ristad, E.S. (1985). "GPSG-Recognition is NP-hard," A.I. Memo No. 837, Cambridge, MA: M.I.T. Artificial Intelligence Laboratory.

Ristad, E.S. (1986). "Complexity of Linguistic Models: A Computational Analysis and Reconstruction of Generalized Phrase Structure Grammar," S.M. Thesis, MIT Department of Electrical Engineering and Computer Science. (In progress.)
DEFINING NATURAL LANGUAGE GRAMMARS IN GPSG

Eric Sven Ristad
MIT Artificial Intelligence Lab, 545 Technology Square, Cambridge, MA 02139
and Thinking Machines Corporation, 245 First Street, Cambridge, MA 02142

1 Overview

Three central goals of work in the generalized phrase structure grammar (GPSG) linguistic framework, as stated in the leading book "Generalized Phrase Structure Grammar" of Gazdar et al. (1985) (hereafter GKPS), are: (1) to characterize all and only the natural language grammars, (2) to algorithmically determine membership and generative power consequences of GPSGs, and (3) to embody the universalism of natural language entirely in the formal system, rather than by statements made in it.[1]

[1] GKPS clearly outline their goals. One, "to arrive at a constrained metalanguage capable of defining the grammars of natural languages, but not the grammar of, say, the set of prime numbers" (p.4). Two, to construct an explicit linguistic theory whose formal consequences are clearly and easily determinable. These 'formal consequences' include both the generative power consequences demanded by the first goal and membership determination: GPSG regards languages "as collections whose membership is definitely and precisely specifiable" (p.1). Three, to define a linguistic theory where "the universalism [of natural language] is, ultimately, intended to be entirely embodied in the formal system, not expressed by statements made in it" (p.4, my emphasis).

These pages formally consider whether GPSG's weak context-free generative power (wcfgp) will allow it to achieve the three goals. The centerpiece of this paper is a proof that it is undecidable whether an arbitrary GPSG generates the nonnatural language Σ*. On the basis of this result, I argue that GPSG fails to define the natural language grammars, and that the generative power consequences of the GPSG framework cannot be algorithmically determined, contrary to goals one and two.[2] In the process, I examine the linguistic universalism of the GPSG formal system and argue that GPSGs can describe an infinite class of nonnatural context-free languages. The paper concludes with a brief diagnosis of the result and suggests that the problem might be met by abandoning the weak context-free generative power framework and assuming substantive constraints.

[2] The proof technique makes use of invalid computations, and the actual GPSG constructed is so simple, so similar to the GPSGs proposed for actual natural languages, and so flexible in its exact formulation that the method of proof suggests there may be no simple reformulations of GPSG that avoid this problem. The proof also suggests that it is impossible in principle to algorithmically determine whether linguistic theories based on a wcfgp framework (e.g. GPSG) actually define the natural language grammars.

1.1 The Structure of GPSG Theory

A generalized phrase structure grammar contains five language-particular components (immediate dominance (ID) rules, metarules, linear precedence (LP) statements, feature co-occurrence restrictions (FCRs), and feature specification defaults (FSDs)) and four universal components: a theory of syntactic features, principles of universal feature instantiation, principles of semantic interpretation, and formal relationships among various components of the grammar.[3]

[3] This work is based on current GPSG theory as presented in GKPS. The reader is urged to consult that work for a formal presentation and thorough exposition of current GPSG theory.

The set of ID rules obtained by taking the finite closure of the metarules on the ID rules is mapped into local phrase structure trees, subject to principles of universal feature instantiation, FSDs, FCRs, and LP statements. Finally, these local trees are assembled to form phrase structure trees, which are terminated by lexical elements.

The essence of GPSG is the constrained mapping of ID rules into local trees. The constraints of GPSG theory subdivide into absolute constraints on local trees (due to FCRs and LP statements) and relative constraints on the rule to local tree mapping (stemming from FSDs and universal feature instantiation). The absolute constraints are all language-particular, and consequently not inherent in the formal GPSG framework. Similarly, the relative constraints, of which only universal instantiation is not explicitly language-particular, do not apply to fully specified ID rules and consequently are not strongly inherent in the GPSG framework either.[4] In summary, GPSG local trees are only as constrained as ID rules are: that is, not at all. The only constraint strongly inherent in GPSG theory (when compared to context-free grammars (CFGs)) is finite feature closure, which limits the number of GPSG nonterminal symbols to be finite and bounded.[5]

[4] I use "strongly inherent" to mean "unavoidable by virtue of the formal framework." Note that the use of problematic feature specifications in universal feature instantiation means that this constraint is dependent on other, parochial, components (e.g. FCRs). Appropriate choice of FCRs or ID rules will abrogate universal feature instantiation, thus rendering it implicitly language-particular too.

[5] This formal constraint is extremely weak, however, since the theory of syntactic features licenses more than 10^775 syntactic categories. See Ristad, E.S. (1986), "Computational Complexity of Current GPSG Theory," in these proceedings for a discussion.

1.2 A Nonnatural GPSG

Consider the exceedingly simple GPSG for the nonnatural language Σ*, consisting solely of the two ID rules

   S → {}, H
   S → ε

This GPSG generates local trees with all possible subcategorization specifications -- the SUBCAT feature may assume any value in the non-head daughter of the first ID rule, and S generates the nonnatural language Σ*.

This exhibit is inconclusive, however. We have only shown that GKPS -- and not GPSG -- have failed to achieve the first goal of GPSG theory. The exhibition leaves open the possibility of trivially reformalizing GPSG or imposing ad-hoc constraints on the theory such that I will no longer be able to personally construct a GPSG for Σ*.
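The triviality of this grammar can be made vivid with a small sketch (illustrative only, not GKPS formalism): since the wholly unspecified daughter {} extends to a lexical category for any word (SUBCAT is unconstrained) and the head H is again an S, recognition can never fail.

```python
# Illustrative sketch (not GKPS formalism): why S -> {}, H plus
# S -> e generate Sigma*. The unspecified daughter {} extends to a
# lexical category for any word, and the head H is again an S.
def s_derives(words):
    if not words:
        return True                # S -> e
    # words[0]: {} extends to some lexical category for this word --
    # always possible, since SUBCAT (hence the lexicon) is unconstrained
    return s_derives(words[1:])    # H is again an S

assert s_derives("colorless green ideas".split())  # every string is in L(S)
```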
3 The set of ID rules obtained by taking the finite closure of the metarules on the ID rules is mapped into local phrase structure trees, subject to principles of universal feature instan- tiation, FSDs, FCRs, and LP statements. Finally, these local trees are assembled to form phrase structure trees, which are termmated by lexical elements. The essence of GPSG is the constrained mapping of ID rules into local trees. The constraints of GPSG theory subdivide into absolute constraints on local trees (due to FCRs and LP- statements) and relative constraints on the rule to local tree mapping (stemming from FSDs and universal feature instan- tiation). The absolute constraints are all language-particular, and consequently not inherent in the formal GPSG framework. Similarly, the relative constraints, of which only universal in- stantiation is not explicitly language-particular, do not apply to fully specified ID rules and consequently are not strongly in- herent in the GPSG framework either. 4 In summary, GPSG local trees are only as constrained as ID rules are: that is, not at all. The only constraint strongly inherent in GPSG theory (when compared to context-free grammars (CFGs)) is finite feature closure, which limits the number of GPSG nonterminal symbols to be finite and bounded. S 1.2 A Nonnatural GPSG Consider the exceedingly simple GPSG for the nonnatural lan- guage Z*, consisting solely of the two ID rules SThis work is based on current GPSG theory as presented in GKPS. The reader is urged to consult that work for a formal presentation and thorough exposition of current GPSG theory. 4I use "strongly inherent" to mean ~unavoidable by virtue of the formal framework." Note that the use of problematic feature specifications in universal feature instantiation means that this constraint is dependent on other, parochial, components (e.g. FCRs). Appropriate choice of FCRs or ID rules will abrogate universal feature inetantiation, thus rendering it implicitly language particular too. 5This formal constraint is extremely weak, however, since the theory of syntactic features licenses more than 10 TM syntactic categories. See Ristad, E.S. (1986), ~Computational Complexity of Current GPSG Theory ~ in these proceedings for a discussion. 40 S ---* {},H I E This G PSG generates local trees with all possible subcategoriza- tion specifications -- the SUBCAT feature may assume any value in the non-head daughter of the first ID rule, and S generates the nonnatural language ~*. This exhibit is inconclusive, however. We have only shown that GKPS -- and not GPSG -- have failed to achieve the first goal of GPSG theory. The exhibition leaves open the possibility of trivially reformalizing GPSG or imposing ad-hoc constraints on the theory such that I will no longer be able to personally construct a GPSG for Z*. 2 Undecidability and Generative Power in GPSG That "= Z*?" is undecidable for arbitrary context-free gram- mars is a well-known result in the formal language literature (see Hopcraft and Ullman(1979:201-203)). The standard proof is to construct a PDA that accepts all invalid computations of a TM M. From this PDA an equivalent CFG G is directly con- structible. Thus, L(G) = ~' if and only if all computations of M are invalid, i.e. L(M) = 0. The latter problem is undecid- able, so the former must be also. No such reduction is possible for a proof that "-- ~*?" is undecidable for arbitrary GPSGs. In the above reduction, the number of nonterminals in G is a function of the size of the simulated TM M. 
GPSGs, however, have a bounded number of nonterminal symbols, and as discussed above, that is the essential difference between CFGs and GPSGs. Only weak generative power is of interest for the follow- ing proof, and the formal GPSG constraints on weak generative power are trivially abrogated. For example, exhaustive constant partial ordering (ECPO) -- which is a constraint on strong gen- erative capacity -- can be done away with for all intents and purposes by nonterminal renaming, and constraints arising from principles of universal feature instantiation don't apply to fully instantiated ID rules. First, a proof that "-- ~*?" is undecidable for context-free grammars with a very small number of terminal and nonter- minal symbols is sketched. Following the proof for CFGs, the equivalent proof for GPSGs is outlined. 2.1 Outline of a Proof for Small CFGs Let L(z,~ ) be the class of context-free grammars with at least x nonterminal and y terminal symbols. I now sketch a proof that it is undecidable of an arbitrary CFG G c L(~,v ) whether L(G) = ~* for some x, y greater than fixed lower bounds. The actual construction details are of no obvious mathematical or pedagogical interest, and will not be included. The idea is to directly construct a CFG to generate the invalid computa- tions of the Universal Turing Machine (UTM). This grammar will be small if the UTM is small. The "smallest UTM" of Minsky(1967:276-281) has seven states and a four symbol tape alphabet, for a state-symbol product of 28 (!). Hence, it is not surprising that the "smallest GUT M" that generates the invalid computations of the UTM has seventeen nonterminals and two terminals. Observe that if a string w is an invalid computation of the universal Turing machine M = (Q,]E, r, 5, q0, B, F) on input x, then one of the following conditions must hold. 1. w has a "syntactic error," that is, w is not of the form Xl~g2~''" ~Xm~ , where each xi is an instantaneous de- scription (ID) of M. Therefore, some xl is not an ID of M. 2. xl is not initial; that is, Xl ~ q0~* 3. x,~ is not final; that is xm ~ r*fF* 4. x~ F-. M (X~+l) R is false for some odd i 5. (xi) R ~-*M Xi+l is false for some even i Straightforward construction of GVTM will result in a CFG containing on the order of twenty or thirty nonterminals and at least fifteen terminals (one for each UTM state and tape symbol, one for the blank-tape symbol, and one for the instan- taneous description separator "~'). Then the subgrammars which ensure that (xi) R ~-~'M xi+l is false for some even i and that x~ ~--~M (xi+l) R is false for some odd i may be cleverly combined so that nonterminals encode more information, and SO on. The final trick, due to Albert Meyer, reduces the terminals to 2 at the cost of a lone nonterminal by encoding the n ter- minals as log n -- k-bit words over the new terminal alphabet {0, 1}, and adding some rules to ensure that the final grammar could generate ]E* and not (~4).. The productions N4 --* OL41L4 I OOL4 I 01L~ I llL4 I ... are added to the converted CFG GtVTM, which generates a language of the form L4 --* oooo I OOOl ] OOlO I ... I E I L4L4 Where L4 generates all symbols of length 4, and N4 gener- ates all strings not of length 0 rood k, where k = 4 (i.e. all strings of length 1,2,3 mod 4). Deeper consideration of the ac- tual GUTM reveals that the N4 nonterminal is also eliminable. Note that all the preceding efforts to reduce the number of nonterminals and terminals increase the number of context-free productions. 
This symbol-production tradeoff becomes clearer when one actually constructs G_UTM. Suppose the distinguished start symbol for G_UTM is S_UTM. Then we form a new CFG consisting of all productions of the form

    S → {Q - q0}{Σ^p - ⟨M⟩}{N4 ∪ L4}

and the one production

    S → S_UTM

where ⟨M⟩ is the length-p encoding of an arbitrary TM M, and L4, N4 are as defined above. This ensures that strings whose prefix is "q0⟨M⟩" will be generated starting from S if and only if they are generated starting from S_UTM: that is, they are invalid computations of the UTM on M.

2.2 Some Details for L(x,y) and GPSG

Let the nonterminal symbols Γ, Q, and Σ in the following CFG portion generate the obvious terminal symbols corresponding to the equivalent UTM sets. B is the terminal blank symbol. Then, the following sketched CF productions generate the IDs of M such that xi ⊢M (xi+1)^R is false for some odd i. The S4 and S5 nonterminals are used to locate the even and odd i IDs xi of w. S_ok generates the language {Γ ∪ #}*.

    S4 → ΓS4 | #S5 | #S_odd S_ok
    S5 → ΓS5 | #S4 | #S_odd S_ok
    S_odd → S1#
    S1 → ΓS1Γ | S2 | S6 | S7
    S6 → ΓS6 | ΓS3
    S7 → S7Γ | S3Γ
    S2 → ΣaΣS3ΓbΓ            where a ≠ b, both in Σ
    S2 → aqbS3{Γ^3 - pca}    if δ(q, b) = (p, c, R)
         aqbS3{Γ^3 - cap}    if δ(q, b) = (p, c, L)
    S2 → aqB#B{Γ^3 - pca}    if δ(q, B) = (p, c, R)
         aqB#B{Γ^3 - cap}    if δ(q, B) = (p, c, L)
    S3 → ΓS3Γ | QB#BΓΓ | ΣB#BΓ

S1 and S2 must generate a false transition for odd i, while S3 need not generate a false transition and is used to pad out the IDs of w. The nonterminals S6, S7 accept IDs with improperly different tape lengths. The first S2 production accepts transitions where the tape contents differ in a bad place, the second S2 production accepts invalid transitions other than at the end of the tape, and the third S2 accepts invalid end-of-the-tape transitions. Note that the last two S2 productions are actually classes of productions, one for each string in Γ^3 - pca, Γ^3 - cap, ....

The GPSG for "= Σ*?" is constructed in a virtually identical fashion. Recall that the GPSG formal framework does not bar us from constructing a grammar equivalent to the CFG just presented. The ID rules used in the construction will be fully specified so as to defeat universal feature instantiation, and the construction will use nonterminal renaming to avoid ECPO. Let the GPSG category C be fully specified for all features (the actual values don't matter) with the exception of, say, the binary features GER, NEG, NULL and POSS. Arrange those four features in some canonical order, and let binary strings of length four represent the values assigned to those features in a given category. For example, C[0100] represents the category C with the additional specifications ([-GER], [+NEG], [-NULL], [-POSS]). We replace S_odd by C[0000], S1 by C[0001], S2 by C[0010], S3 by C[0011], S6 by C[0100], and S7 by C[0101]. The nonterminal Γ is replaced by three symbols of the form C[11xx], one for each linear precedence Γ conforms to. Similarly, Σ is replaced by two symbols of the form C[100x].
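The renaming scheme is easy to make concrete. The following sketch (my own illustration; the helper names are not from the paper) expands a bit string such as 0100 into the feature specifications it abbreviates, using the canonical feature order assumed in the text.

    # Canonical order of the four free binary features, as in the text.
    FEATURES = ["GER", "NEG", "NULL", "POSS"]

    def expand_category(bits):
        """Expand a bit string like '0100' into the feature
        specifications it abbreviates on the category C."""
        assert len(bits) == len(FEATURES)
        specs = [("+" if b == "1" else "-") + f
                 for b, f in zip(bits, FEATURES)]
        return "C[" + ", ".join(specs) + "]"

    # The renaming of CFG nonterminals to GPSG categories.
    RENAMING = {"S_odd": "C[0000]", "S1": "C[0001]", "S2": "C[0010]",
                "S3": "C[0011]", "S6": "C[0100]", "S7": "C[0101]"}

    print(expand_category("0100"))   # C[-GER, +NEG, -NULL, -POSS]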
The ID rules, in the same order as the CF productions above (with a portion of the necessary LP statements) are:

    C[0000] → C[0001]#
    C[0001] → C[1100]C[0001]C[1101] | C[0010] | C[0100] | C[0101]
    C[0100] → C[1100]C[0100] | C[1100]C[0011]
    C[0101] → C[0101]C[1101] | C[0011]C[1101]
    C[0010] → C[1000]aC[1001]C[0011]C[1100]bC[1101]
                                     where a ≠ b, both in Σ
    C[0010] → aqbC[0011]{Γ^3 - pca}  if δ(q, b) = (p, c, R)
              aqbC[0011]{Γ^3 - cap}  if δ(q, b) = (p, c, L)
    C[0010] → aqB#B{Γ^3 - pca}       if δ(q, B) = (p, c, R)
              aqB#B{Γ^3 - cap}       if δ(q, B) = (p, c, L)
    C[0011] → C[1100]C[0011]C[1101] | QB#BC[1100]C[1101] | C[1000]B#BC[1100]

    C[1100] < C[0001], C[0011], C[0100], C[0101] < C[1101]
    C[1000] < a < C[1001] < C[0011] < C[1110]

While the sketched ID rules are not valid GPSG rules, just as the sketched context-free productions were not the valid components of a context-free grammar, a valid GPSG can be constructed in a straightforward and obvious manner from the sketched ID rules. There would be no metarules, FCRs or FSDs in the actual grammar. The last comment to be made is that in the actual G_UTM, only the number of productions is a function of the size of the UTM. The UTM is used only as a convincing crutch -- i.e. not at all. Only a small, fixed number of nonterminals are needed to construct a CFG for the invalid computations of any arbitrary Turing Machine.

3 Interpreting the Result

The preceding pages have shown that the extremely simple nonnatural language Σ* is generated by a GPSG, as is the more complex language L_IC consisting of the invalid computations of an arbitrary Turing machine on an arbitrary input. Because L_IC is a GPSG language, "= Σ*?" is undecidable for GPSGs: there is no algorithmic way of knowing whether any given GPSG generates a natural language or an unnatural one. So, for example, no algorithm can tell us whether the English GPSG of GKPS really generates English or Σ*.

The result suggests that goals 1, 2, 3 and the context-free framework conflict with each other. Weak context-free generative power allows both Σ* and L_IC, yet by goal 1 we must exclude nonnatural languages. Goal 2 demands it be possible to algorithmically determine whether a given GPSG generates a desired language or not, yet this cannot be done in the context-free framework. Lastly, goal 3 requires that all nonnatural languages be excluded on the basis of the formal system alone, but this looks to be impossible given the other two goals, the adopted framework, and the technical vagueness of "natural language grammar."

The problem can be met in part by abandoning the context-free framework. Other authors have argued that natural language is not context-free, and here we argue that the GPSG theory of GKPS can characterize context-free languages that are too simple or trivial to be natural, e.g. any finite or regular language.6 The context-free framework is both too weak and too strong -- it includes nonnatural languages and excludes natural ones. Moreover, CFLs have the wrong formal properties entirely: natural language is surely not closed under union, concatenation, Kleene closure, substitution, or intersection with regular sets!7 In short, the context-free framework is the wrong idea completely, and this is to be expected: why should the arbitrary generative power classifications of mathematics (formal language theory) be at all relevant to biology (human language)?

Goal 2, that the naturalness of grammars postulated by linguistic theory be decidable, and to a lesser extent goal 3, are of dubious merit.
In my view, substantive constraints arising from psychology, biology or even physics may be freely invoked, with a corresponding change in the meaning of "natural language grammar" from "mentally-representable grammar" to something like "easily learnable and speakable mentally-representable grammar." There is no a priori reason or empirical evidence to suggest that the class of mentally representable grammars is not fantastically complex, maybe not even decidable.8

One promising restriction in this regard, which if properly formulated would alleviate GPSG's actual and formal inability to characterize only the natural language grammars, is strong nativism -- the restrictive theory that the class of natural languages is finite. This restriction is well motivated both by the issues raised here and by other empirical considerations.9 The restriction, which may be substantive or purely formal, is a formal attack on the heart of the result: the theory of undecidability is concerned with the existence or nonexistence of algorithms for solving problems with an infinity of instances. Furthermore, the restriction may be empirically plausible.10,11 The author does not have a clear idea how GPSG might be restricted in this manner, and merely suggests strong nativism as a well-motivated direction for future GPSG research.

6While "natural language grammar" is not defined precisely, recent work has demonstrated empirically that natural language is not context-free, and therefore GPSG theory will not be able to characterize all the human language grammars. See, for example, Higginbotham (1984), Shieber (1985), and Culy (1985). For counterarguments, see Pullum (1985). Nash (1980), chapter 5, discusses the impossibility of accounting for free word order languages (e.g. Warlpiri) using ID/LP grammars. I focus on the goal of characterizing only the natural language grammars in this paper.
7The finite, bounded number of nonterminals allowed in GPSG theory plays a linguistic role in this regard, because the direct consequence of finite feature closure is that GPSG languages are not truly closed under union, concatenation, or substitution.
8See Chomsky (1980:120) for a discussion.
9Note that invoking finiteness here is technically different from hiding intractability with finiteness. Finiteness is the correct generalization here, because we are interested in whether GPSG generates nonnatural languages or not, and not in the computational cost of determining the generative capacity of an arbitrary GPSG. A finiteness restriction for the purposes of computational complexity is invalid because it prevents us from properly using the tools of complexity theory to study the computational complexity of a problem.

Acknowledgments. The author is indebted to Ed Barton, Robert Berwick, Noam Chomsky, Jim Higginbotham, Richard Larson, Albert Meyer, and David Waltz for assistance in writing this paper, and to the MIT Artificial Intelligence Lab and Thinking Machines Corporation for supporting this research.

4 References

Chomsky, N. (1980) Rules and Representations. New York: Columbia University Press.

Gazdar, G., E. Klein, G. Pullum, and I. Sag (1985) Generalized Phrase Structure Grammar. Oxford, England: Basil Blackwell.

Higginbotham, J. (1984) "English is not a Context-Free Language," Linguistic Inquiry 15: 119-126.

10See Osherson et al. (1984) for an exposition of strong nativism and related issues.
The theory of strong nativism can be derived in formal learning theory from three empirically motivated axioms: (1) the ability of language learners to learn in noisy environments, (2) language learner memory limitations (e.g. inability to remember long-past utterances), and (3) the likelihood that language learners choose simple grammars over more complex, equivalent ones. These formal results are weaker empirically than they might appear at first glance: the equivalence of "learned" grammars is measured using only weak generative capacity, ignoring uniformity considerations.
11An alternate substantive constraint, suggested by Higginbotham (personal communication) and not explored here, is to require natural language grammars to generate non-dense languages. Let the density of a class of languages be an upper bound (across all languages in the class) on the ratio of grammatical utterances to grammatical and ungrammatical utterances, in terms of utterance lengths. If the density of natural languages was small or even logarithmic in utterance length, as one might expect, and a decidable property of the reformulated GPSGs, then undecidability of "= Σ*?" would no longer reflect on the decidability of whether the GPSG framework characterized all and only the natural language grammars. The exact specification of this density constraint is tricky because unit density decides "= Σ*?", and therefore density measurements cannot be too accurate. Furthermore, Σ* and L_IC can be buried in other languages, i.e. concatenated onto the end of an arbitrary (finite or infinite) language, weakening the accuracy and relevance of density measurements.

Hopcroft, J.E., and J.D. Ullman (1979) Introduction to Automata Theory, Languages, and Computation. Reading, MA: Addison-Wesley.

Minsky, M. (1967) Computation: Finite and Infinite Machines. Englewood Cliffs, NJ: Prentice-Hall.

Nash, D. (1980) "Topics in Warlpiri Grammar," M.I.T. Department of Linguistics and Philosophy Ph.D. dissertation, Cambridge.

Osherson, D., M. Stob, and S. Weinstein (1984) "Learning Theory and Natural Language," Cognition 17: 1-28.

Pullum, G.K. (1985) "On Two Recent Attempts to Show that English is Not a CFL," Computational Linguistics 10: 182-186.

Shieber, S.M. (1985) "Evidence Against the Context-Freeness of Natural Language," Linguistics and Philosophy 8: 333-344.
CONSTRAINT PROPAGATION IN KIMMO SYSTEMS

G. Edward Barton, Jr.
M.I.T. Artificial Intelligence Laboratory
545 Technology Square
Cambridge, MA 02139

ABSTRACT

Taken abstractly, the two-level (Kimmo) morphological framework allows computationally difficult problems to arise. For example, N + 1 small automata are sufficient to encode the Boolean satisfiability problem (SAT) for formulas in N variables. However, the suspicion arises that natural-language problems may have a special structure -- not shared with SAT -- that is not directly captured in the two-level model. In particular, the natural problems may generally have a modular and local nature that distinguishes them from more "global" SAT problems. By exploiting this structure, it may be possible to solve the natural problems by methods that do not involve combinatorial search.

We have explored this possibility in a preliminary way by applying constraint propagation methods to Kimmo generation and recognition. Constraint propagation can succeed when the solution falls into place step-by-step through a chain of limited and local inferences, but it is insufficiently powerful to solve unnaturally hard SAT problems. Limited tests indicate that the constraint-propagation algorithm for Kimmo generation works for English, Turkish, and Warlpiri. When applied to a Kimmo system that encodes SAT problems, the algorithm succeeds on "easy" SAT problems but fails (as desired) on "hard" problems.

INTRODUCTION

A formal computational model of a linguistic process makes explicit a set of assumptions about the nature of the process and the kind of information that it fundamentally involves. At the same time, the formal model will ignore some details and introduce others that are only artifacts of formalization. Thus, whenever the formal model and the actual process seem to differ markedly in properties, a natural assumption is that something has been missed in formalization -- though it may be difficult to say exactly what. When the difference is one of worst-case complexity, with the formal framework allowing problems to arise that are too difficult to be consistent with the received difficulty of actual problems, one suspects that the natural computational task might have significant features that the formalized version does not capture and exploit effectively. This paper introduces a constraint propagation method for "two-level" morphology that represents a preliminary attempt to exploit the features of local information flow and linear separability that we believe are found in natural morphological-analysis problems. Such a local character is not shared by more difficult computational problems such as Boolean satisfiability, though such problems can be encoded in the unrestricted two-level model. Constraint propagation is less powerful than backtracking search, but does not allow possibilities to build up in combinatorial fashion.

TWO-LEVEL MORPHOLOGY

The "two-level" model of morphology developed by Kimmo Koskenniemi is attractive for putting morphological knowledge to use in processing. Two-level rules mediate the relationship between a lexical string made up of morphemes from the dictionary and a surface string corresponding to the form a word would have in text. Equivalently, the rules correspond to finite-state transducers that
can be used in generation and recognition algorithms as implemented in Karttunen's (1983) Kimmo system (and others). As shown in Figure 1, the transducers in the "automaton component" (≈ 20 for Finnish, for instance) all inspect the lexical/surface correspondence at once in order to implement the insertions, deletions, and other spelling changes that may accompany affixation or inflection. Insertions and deletions are handled through null characters that are visible only to the automata. A complete Kimmo system also has a "dictionary component" that regulates the sequence of roots and affixes at the lexical level.

[Figure 1 drawing omitted: two two-headed finite-state automata jointly scanning a lexical tape and the corresponding surface tape.]

Figure 1: The automaton component of the Kimmo system consists of several two-headed finite-state automata that inspect the lexical/surface correspondence in parallel. The automata move together from left to right. (From Karttunen, 1983:176.)

    ALPHABET x y z T F - ,
    ANY =

    "x-consistency" 3 3
        x x =
        T F =
    1:  2 3 1
    2:  2 0 2
    3:  0 3 3

    "y-consistency" 3 3
        y y =
        T F =
    1:  2 3 1
    2:  2 0 2
    3:  0 3 3

    "z-consistency" 3 3
        z z =
        T F =
    1:  2 3 1
    2:  2 0 2
    3:  0 3 3

    "satisfaction" 3 4
        = = - ,
        T F - ,
    1.  2 1 3 0
    2:  2 2 2 1
    3.  1 2 0 0

    END

Figure 2: This is the complete Kimmo generator system for solving SAT problems in the variables x, y, and z. The system includes a consistency automaton for each variable in addition to a satisfaction automaton that does not vary from problem to problem.

Despite initial appearances to the contrary, the straightforward interpretation of the two-level model in terms of finite-state transducers leads to generation and recognition algorithms that can theoretically do quite a bit of backtracking and search. For illustration we will consider the Kimmo system in Figure 2, which encodes Boolean satisfiability for formulas in three variables x, y, and z. The Kimmo generation algorithm backtracks extensively while determining truth-assignments for formulas according to this system. (See Barton (1986) and references cited therein for further details of the Kimmo system and of the system in Figure 2.)

Taken in the abstract, the two-level model allows computationally difficult situations to arise despite initial appearances to the contrary, so why shouldn't they also turn up in the analysis of natural languages? It may be that they do turn up; indeed, the relevant mathematical reductions are abstractly based on the Kimmo treatment of vowel harmony and other linguistic phenomena. Yet one feels that the artificial systems used in the mathematical reductions are unnatural in some significant way -- that similar problems are not likely to turn up in the analysis of Finnish, Turkish, or Warlpiri. If this is so, then the reductions say more about what is thus-far unexpressed in the formal model than about the difficulty of morphological analysis; it would be impossible to crank the difficult problems through the formal machinery if the machinery could be infused with more knowledge of the special properties of natural language.1

MODULAR INFORMATION STRUCTURE

The ability to use particular representations and processing methods is underwritten by what may be called the "information structure" of a task -- more abstract than a particular implementation, and concerned with such questions as whether a certain body of information suffices for making certain decisions, given the constraints of the problem.
What is it about the information structure of morphological systems that is not captured when they are encoded as Kimmo systems? Are there significant locality principles and so forth that hold in natural languages but not in mathematical systems that encode CNF Boolean satisfaction problems (SAT)? Perhaps a better understanding of the information relationships of the natural problem can lead to more specialized processing methods that require less searching, allow more parallelism, run more efficiently, or are more satisfying in some other way.

A lack of modular information structure may be one way in which SAT problems are unnatural compared to morphological-analysis problems. Making this idea precise is rather tricky, for the Kimmo systems that encode SAT problems are modular in the sense that they involve various independent Kimmo automata assembled in the usual way. However, the essential notion is that the Boolean satisfaction problem has a more interconnected and "global" character than morphological analysis. The solution to a satisfaction problem generally cannot be deduced piece by piece from local evidence. Instead, the acceptability of each part of the solution may depend on the whole problem. In the worst case, the solution is determined by a complex conspiracy among the problem constraints instead of being composed of independently derivable subparts. There is little alternative to running through the possible cases in a combinatorial way.

In contrast to this picture, in a morphological analysis problem it seems more likely that some pieces of the solution can be read off relatively directly from the input, with other pieces falling into place step-by-step through a chain of limited and local inferences and without the kind of "argument by cases" that search represents. We believe the usual situation is for the various complicating processes to operate in separate domains -- defined for instance by separate feature-groups -- instead of conspiring closely together.

The idea can be illustrated with a hypothetical language that has no processes affecting consonants but several right-to-left harmony processes affecting different features of vowels. By hypothesis, underlying consonants can be read off directly. The right-to-left harmony processes mean that underlying vowels cannot always be identified when the vowels are first seen. However, since the processes affect different features, uncertainty in one area will not block conclusions in others. For instance, the processing of consonants is not derailed by uncertainty about vowels, so information about underlying consonants can potentially be used to help identify the vowels. In such a scenario, the solution to an analysis problem is constructed more by superposition than by trying out solutions to intertwined constraints.

A SAT problem can have either a local or global information structure; not all SAT problems are difficult. The unique satisfying assignment for the formula (-y v z)&(x v y)&-x is forced piece by piece; the conjunct -x forces x to be false, so y must be true, so finally z must be true. In contrast, it is harder to see that the formula is unsatisfiable. The problem is not just increased length; a different method of argument is required.

1The systems under consideration in this paper deal with orthographic representations, which are somewhat remote from the "more natural" linguistic level of phonology and contain both more and less information than phonological representations.
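That contrast can be made vivid with a few lines of code. The sketch below (my illustration, not from the paper) runs plain unit propagation -- exactly the "chain of limited and local inferences" at issue -- over the easy formula; clauses are lists of (variable, sign) literals, with sign False marking negation.

    def unit_propagate(clauses):
        """Repeatedly assign any variable forced by a unit clause
        (all other literals falsified); returns the forced partial
        truth-assignment.  No search is performed."""
        assignment = {}
        changed = True
        while changed:
            changed = False
            for clause in clauses:
                satisfied = any(assignment.get(v) == s
                                for v, s in clause if v in assignment)
                unassigned = [(v, s) for v, s in clause
                              if v not in assignment]
                if not satisfied and len(unassigned) == 1:
                    v, s = unassigned[0]
                    assignment[v] = s      # the literal is forced
                    changed = True
        return assignment

    # The "easy" formula (-y v z)&(x v y)&-x from the text:
    easy = [[("y", False), ("z", True)],
            [("x", True), ("y", True)],
            [("x", False)]]
    print(unit_propagate(easy))   # {'x': False, 'y': True, 'z': True}

On the easy formula every value is forced in turn; on a formula whose unsatisfiability rests on a conspiracy among the clauses, the same loop assigns nothing and a case analysis is unavoidable.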
Conclusions about the difficult formula are not forced step by step as with the easy formula. Instead, the lack of "local information channels" seems to force an argument by cases.

A search procedure of the sort used in the Kimmo system embodies few assumptions about possible modularity in natural-language phonology. Instead, the implicit assumption is that any part of an analysis may depend on anything to its left. For example, consider the treatment of a right-to-left long-distance harmony process, which makes it impossible to determine the interpretation of a vowel when it is first encountered in a left-to-right scan. Faced with such a vowel, the current Kimmo system will choose an arbitrary possible interpretation and arrange for eventual rejection if the required right context never shows up. In the event of rejection, the system will carry out chronological backtracking until it eventually backs up to the erroneous choice point. Another choice will then be made, but the entire analysis to the right of the choice point will be recomputed -- thus revealing the implicit assumption of possible dependence.

By making few assumptions, such a search procedure is able to succeed even in the difficult case of SAT problems. On the other hand, if modularity, local constraint, and limited information flow are more typical than difficult global problems, it is appropriate to explore methods that might reduce search by exploiting this aspect of information structure.

We have begun exploring such methods in a preliminary and approximate way by implementing a modular, non-searching constraint-propagation algorithm (see Winston (1984) and other sources) for Kimmo generation and recognition. The deductive capabilities of the algorithm are limited and local, reflecting the belief that morphological analyses can generally be determined piece by piece through local processes. The automata are largely decoupled from each other, reflecting an expectation that phonological constraints generally will not conspire together in complicated ways.

The algorithm will succeed when a solution can be built up, piece by superimposed piece, by individual automata -- but by design, in more difficult cases the constraints of the automata will be enforced only in an approximate way, with some nonsolutions accepted (as is usual with this kind of algorithm). In general, the guiding assumption is that morphological analysis problems actually have the kind of modular and superpositional information structure that will allow constraint propagation to succeed, so that the complexity of a high-powered algorithm is not needed. (Such a modular structure seems consonant with the picture suggested by autosegmental phonology, in which various separate tiers flesh out the skeletal slots of a central core of CV timing slots; see Halle (1985) and references cited there.)

SUMMARIZING COMBINATIONS OF POSSIBILITIES

The constraint-propagation algorithm differs from the Kimmo algorithms in its treatment of nondeterminism. In terms of Figure 1, nondeterminism cannot arise if both the lexical and surface strings have already been determined. This is true because a Kimmo automaton lists only one next state for a given lexical/surface pair. However, in the more common tasks of generation and recognition, only one of the two strings is given. The generation task that will be the focus here uses the automata to find the surface string (e.g. tries) that corresponds to a lexical string (e.g.
try+s) that is supplied as input. As the Kimmo automata progress through the input, they step over one lexical/surface pair at a time. Some lexical characters will uniquely determine a lexical/surface pair; in generation from try+s the first two pairs must be t/t and r/r. But at various points, more than one lexical/surface pair will be admissible given the evidence so far. If y/y and y/i are both possible, the Kimmo search machinery tries both pairs in subcomputations that have nothing to do with each other. The choice points can potentially build on each other to define a search space that is exponential in the number of independent choice points. This is true regardless of whether the search is carried out depth-first or breadth-first.2

For example, return to the artificial Kimmo system that decides Boolean satisfiability for formulas in variables x, y, and z (Figure 2). When the initial y of the formula yz,x-y-z,-x,-y is seen, there is nothing to decide between the pairs y/T and y/F. If the system chooses y/T first, the choice will be remembered by the y-consistency automaton, which will enter state 2. Alternatively, if the possibility y/F is explored first, the y-consistency automaton will enter state 3. After yz,x has been seen, the x-, y-, and z-consistency automata may be in any of the

2See Karttunen (1983:184) on the difference in search order between Karttunen's Kimmo algorithms and the equivalent procedures originally presented by Koskenniemi.
We do not expect that the corresponding imprecision will matter for natural language: instead, we expect that the decoupled automata will individually determine unique states for themselves, a situation in which the summary is precise. 3 For instance, aObviously, this can be true ill a recognition problem only if the input is morphologically unambiguous, in which case it can still fail to hold if the constraint-propagation method is insufficiently powerful to 48 x-consistency 1 ... y-consistency 1 " " z-consistency 1 .... satisfaction 1 "" "" 1 "'" • " 1 ....... 2,3-.. • -'1,2 ...... ~,2"" I .... 1 ""t .... 2,3"" x/T '/' x/F input y z , x ""2,3"" "'2,3"" "'2,3"" ""1,2".. Figure 3: The constraint-propagation algorithm produces this representation when processing the first few characters of the formula yz.x-y-z.-x,-y using the automata from Figure 2. At this point no truth-values have been definitely determined. in the case of generation involving right-to-left vowel har- mony, only the vowel harmony automaton should exhibit nondeterminism, which should be resolved upon process- ing of the necessary right context. The imprecision also will not matter if two constraints are so independent that their solutions can be freely combined, since the summary will not lose any information in that case. CONSTRAINT PROPAGATION Like the Kimmo machinery, the constraint-propagation machinery is concerned with the states of the automata at intercharacter positions. But when nondeterminism makes more than one state-combination possible at some position, the constraint-propagation method summarizes the possi- bilities and continues instead of trying a single guess. The result is a two-dimensional multi-valued tableau containing one row for each automaton and one column for each inter- character position in the input) Figure 3 shows the first few columns that are produced in generating from the SAT rule out invalid possibilities. Note that many cases of morphological ambiguity involve bracketing (e.g. un[loadableJ/[unloadJable) rather than the identity of lexical characters. Though the matter is not discussed here, we propose to handle bracketing ambiguity and lexical- string anabiguity by different mechanisms. In addition, for discussions of morphological ambiguity, it becomes very important whether the input representation is phonetic or non-phonetically orthographic, 4An extra column is needed at each position where a null might be inserted. formula yz ,x-y-z, -x.-y. The initial y can be interpreted as either y/T or y/F, and consequently the y-consistency automaton can end up in either state 2 or state 3. Simi- larly, depending on which pair is chosen, the satisfaction automaton can end up in either state 1 (no true value seen) or state 2 (a true value seen). In addition to the states of the automata, the tableau contains a pair set for each character, initialized to con- tain all feasible lexical/surface pairs (el. Gajek et al., 1983) that match the input character. As Figure 3 suggests, the pair set is common to all the automata; each pair in the pair set must be acceptable to every automaton. If one automaton has concluded that there cannot be a surface g at the current position, it makes no sense to let another automaton assume there might be one. The automata are therefore not completely decoupled, and effects may prop- agate to other automata when one automaton eliminates a pair from consideration. 
Such propagation will occur only if more than one automaton distinguishes among the pos- sible pairs at a given position. For example, an automaton concerned solely with consonants would be unaffected by new information about the identity of a vowel. Wahz's line-labelling procedure, the best-known early example of a constraint-propagation procedure (el. Win- ston, 1984), proceeds from an underconstrained initial la- belling by eliminating impossible junction labels. A label is impossible if it is incompatible with every possible label at some adjacent junction. The constraint-propagation pro- cedure for Kimrno systems proceeds in much the same way. 49 A possible state of an automaton can be eliminated in four ways: • The only possible predecessor of the state (given the pair set) is ruled out in the previous state set. • The only possible successor of the state (given the pair set) is ruled out in the next state set. • Every pair that allows a transition out of the state is eliminated at the rightward character position. • Every pair that allows a transition into the state is eliminated at the leftward character position. Similarly, a pair is ruled out whenever any automaton be- comes unable to traverse it given the possible starting and ending states for the transition. (There are special rules for the first and last character position. Null characters also require special treatment, which will not be described here.) The configuration shown in Figure 3 is in need of con- straint propagation according to these rules. State 1 of the satisfaction automaton does not accept the comma/comma pair, so state 1 is eliminated from the possible states { 1,2} of the satisfaction automaton after z. State 1 has there- fore been shown as cancelled. However, the elimination of state 1 causes no further effects at this point. The current implementation simplifies the checking of the elimination conditions by associating sets of triples with character positions. Each triple (old state, pair, new state) is a complete description of one transition of a particular automaton. The left, right, and center projections of each triple set must agree with the state sets to the left and right and with the pair set for the position, respectively. Figure 4 shows two of the triple-sets associated with the z-position in Figure 3. The nondeterminism of Figure 3 is finally resolved when the trivial clauses at the end of the formula yz .x-y-z. -x, -y are processed. After x in the clause -x all of the consistency automata are noncommittal, i.e. can be in either state 2 or state 3. The satisfaction automaton was in state 3 before the x because of the minus sign and it can use either of the triples (3,x/T, 1) or (3,x/F,2). However, on the next step it is discovered that only state 2 will allow it to tra- verse the comma that follows the x. The triple (3,x/T, 1) is eliminated and the pair x/T goes with it. The elimina- tion of x/T is propagated to the x-consistency automaton, which loses the triple (2,x/T,2) and can no longer sup- port state 2 in the left and right state sets. The loss of state 2, in turn, propagates leftward on the x-satisfaction line back to the initial occurrence of x. The possibility x/T is eliminated everywhere it occurs along the way. Finally, processing resumes at the right edge. In similar fashion, the trivial clause -y eliminates the possibility y/T throughout the formula. However, this time the effects spread beyond the y-automaton. 
When the pos- sibility y/T is eliminated from the first pair-set in Figure 3, the satisfaction automaton can no longer support state 2 between the y and z. This leaves (1,z/T,2) as the only active triple for the satisfaction automaton at the second character position. Thus z/F is eliminated and z is forced to truth. When everything settles down, the "easy" for- mula yz,x-y-z,-x,-y has received the satisfying truth- assignment FT, F-F-T, -F, -F. ALGORITHM CHARACTERISTICS The constraint-propagation algorithm shares with the Waltz labelling procedure a number of characteristics that prevent combinatorial blowup: 5 • The initial possibilities at each point are limited and non-combinatorial; in this case, the triples at some po- sition for an automaton can do no worse than to encode the whole automaton, and there will usually be only a few triples. ]t is particularly significant that the num- ber of triples does not grow combinatorially as more automata are added. • Possibilities are eliminated monotonically, so the lim- ited number of initial possibilities guarantees a limited number of eliminations. • After initialization, propagation to the neighbors of a visited element takes place only if a possibility is elim- inated, so the limited number of eliminations guaran- tees a limited number of visits. • Limited effort is required for each propagator visit. However, we have not done a formal analysis of our im- plementation, in part because many details are subject to change. It would be desirable to replace the weak notion of monotonic possibility-elimination with some (stronger) notion of indelible construction of representation, based if possible on phonological features. Methods have also been envisioned for reducing the distance that information must be propagated in the algorithm. The relative decoupling of the automata and the gen- eral nature of constrain~-propagation methods suggests that a significantly parallel implementation is feasible. How- ever, it is uncertain whether the constraint-propagation method enjoys an advanlage on serial machines. It is clear that the Kimmo machinery does combinatorial search while the constraint-propagation machinery does not, but SThroughout this paper, we are ignoring complications related to the possibility of nulls. 50 y-consistency .... 2,3"" z-consistency .... 1 "" z/T z/F .... 2,3 ........ 2,3"" • "2,3 ........ 1 "" (2, z/T,2) <3, z/T,3) <2, z/F, 2) (3, z/F,3) (1,z/T,2) <1, z/F, 3> .... 2,3 .... .... 2,3 .... Figure 4: When the active transitions of each automaton are represented by triples, it is easy to enforce the constraints that relate the left and right state-sets and the pair set. The left configuration is excerpted from Figure 3, while the right configuration shows the underlying triples. The set of triples for the y-consistency automaton could easily be represented in more concise form. we have not investigated such questions as whether an ana- logue to BIGMACHINE precompilation (Gajek et al., 1983) is possible for the constraint-propagation method. BIG- MACHINE precompilation speeds up the Kimmo machin- ery at a potentially large cost in storage space, though it does not reduce the amount of search. The constraint-propagation algorithm for generation has been tested with previously constructed Kimmo au- tomata for English, Warlpiri, and Turkish. Preliminary re- sults suggest that the method works. However, we have not been able to test our recognition algorithm with previously constructed automata. 
The reason is that existing Kimmo automata rely heavily on the dictionary when used for recognition. We do not yet have our Kimmo dictionaries hooked up to the constraint-propagation algorithms, and consequently an attempt at recognition produces mean- ingless results. For instance, without constraints from the dictionary the machinery may choose to insert suffix- boundary markers + anywhere because the automata do not seriously constrain their occurrence. Figure 5 shows the columns visited by the algorithm when running the Warlpiri generator on a typical example, in this case a past-tense verb form ('scatter-PAST') taken from Nash (1980:85). The special lexical characters I and <u2> implement a right-to-left vowel assimilation process. The last two occurrences of I surface as u under the influ- ence of <u2>, but the boundary # blocks assimilation of the first two occurrences. Here the propagation of constraints has gone backwards twice, once to resolve each of the two sets of I-characters. The final result is ambiguous because our automata optionally allow underlying hyphens to ap- pear on the surface, in accordance with the way morpheme boundaries are indicated in many articles on Warlpiri. The generation and recognition algorithms have also been run on mathematical SAT formulas, with the de- sired result that they can handle "easy" but not "diffi- cult" formulas as described above. ~ For the easy formula (~ v z)&(x v y)&~ constraint propagation determines the solution (T V T)&(F V T)&F. But for the hard formula constraint propagation produces only the wholly uninfor- mative truth-assignment ({T,F} v {T,F} V {T, F})&({T, F} V {T,F}) &({T,F} v {T,F})a({T,F} V {T,F}) &({T,F} v {T, FI)&({T,F} v {T,F}) Since we believe linguistic problems are likely to be more like the easy problem than the hard one, we believe the constraint-propagation system is an appropriate step to- ward the goal of developing algorithms that exploit the information structure of linguistic prob]ems. 6Note that the current classification of formulas as "easy" is dif- ferent from polynomial-time satisfiability. In particular, the restricted problem 2SAT can be solved in polynomial time by resolution, but not every 2SAT formula is "easy ~ in the current sense. 51 012345 1234 2345678910111213 789101112 891011121314 pIrrI#kIjI-rn<u2>: result ambiguous, pirri{O,-}kuju{-.O}rnu Figure 5: This display shows the columns visited by the constraint-propagation algorithm when the Warlpiri generator is used on the form plrrI#kIjI-rn<u2> 'scatter-PAST'. Each reversal of direction begins a new line. Leftward movement always begins with a position adjacent to the current position, but it is an accidental property of this example that rightward movement does also. The final result is ambiguous because the automata are written to allow underlying hyphens to appear optionally on the surface. ACKNOWLEDGEMENTS This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the Laboratory's artificial intel- ligence research has been provided in part by the Ad- vanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014- 80-C-0505. This research has benefited from guidance and commentary from Bob Berwick. REFERENCES Barton, E. (1986). "Computational Complexity in Two- Level Morphology," ACL-86 proceedings (this volume). Gajek, O., H. Beck, D. Elder, and G. Whittemore (1983). 
"LISP Implementation [of the KIMMO system]," Texas Linguistic Forum 22:187-202. Halle, M. (1985). "Speculations about the Representa- tion of Words in Memory," in V. Fromkin, ed., Pho- netic Linguistics: Essays in Honor of Peter Ladefoged, pp. 101-114. New York: Academic Press. Karttunen, L. (1983). "KIMMO: A Two-Level Morpho- logical Analyzer," Tezas Linguistic Forum 22:165-186. Nash, D. (1980). Topics in Warlpiri Grammar. Ph.D. the- sis, Department of Linguistics and Philosophy, M.I.T., Cambridge, Mass. Winston, P. (1984). Artificial Intelligence, second edition. Reading, Mass.: Addison-Wesley. 52
"LISP Implementation [of the KIMMO system]," Texas Linguistic Forum 22:187-202.
COMPUTATIONAL COMPLEXITY IN TWO-LEVEL MORPHOLOGY

G. Edward Barton, Jr.
M.I.T. Artificial Intelligence Laboratory
545 Technology Square
Cambridge, MA 02139

ABSTRACT

Morphological analysis must take into account the spelling-change processes of a language as well as its possible configurations of stems, affixes, and inflectional markings. The computational difficulty of the task can be clarified by investigating specific models of morphological processing. The use of finite-state machinery in the "two-level" model by Kimmo Koskenniemi gives it the appearance of computational efficiency, but closer examination shows the model does not guarantee efficient processing. Reductions of the satisfiability problem show that finding the proper lexical/surface correspondence in a two-level generation or recognition problem can be computationally difficult. The difficulty increases if unrestricted deletions (null characters) are allowed.

INTRODUCTION

The "dictionary lookup" stage in a natural-language system can involve much more than simple retrieval. Inflectional endings, prefixes, suffixes, spelling-change processes, reduplication, non-concatenative morphology, and clitics may cause familiar words to show up in heavily disguised form, requiring substantial morphological analysis. Superficially, it seems that word recognition might potentially be complicated and difficult.

This paper examines the question more formally by investigating the computational characteristics of the "two-level" model of morphological processes. Given the kinds of constraints that can be encoded in two-level systems, how difficult could it be to translate between lexical and surface forms? Although the use of finite-state machinery in the two-level model gives it the appearance of computational efficiency, the model itself does not guarantee efficient processing. Taking the Kimmo system (Karttunen, 1983) for concreteness, it will be shown that the general problem of mapping between lexical and surface forms in two-level systems is computationally difficult in the worst case; extensive backtracking is possible. If null characters are excluded, the generation and recognition problems are NP-complete in the worst case. If null characters are completely unrestricted, the problem is PSPACE-complete, thus probably even harder. The fundamental difficulty of the problems does not seem to be a precompilation effect.

In addition to knowing the stems, affixes, and co-occurrence restrictions of a language, a successful morphological analyzer must take into account the spelling-change processes that often accompany affixation. In English, the program must expect love+ing to appear as loving, fly+s as flies, lie+ing as lying, and big+er as bigger. Its knowledge must be sufficiently sophisticated to distinguish such surface forms as hopped and hoped. Cross-linguistically, spelling-change processes may span either a limited or a more extended range of characters, and the material that triggers a change may occur either before or after the character that is affected. (Reduplication, a complex copying process that may also be found, will not be considered here.)

The Kimmo system described by Karttunen (1983) is attractive for putting morphological knowledge to use in processing. Kimmo is an implementation of the "two-level" model of morphology that Kimmo Koskenniemi proposed and developed in his Ph.D. thesis.1
A system of lexicons in the dictionary component regulates the sequence of roots and affixes at the lexical level, while several finite-state transducers in the automaton component -- ≈ 20 transducers for Finnish, for instance -- mediate the correspondence between lexical and surface forms. Null characters allow the automata to handle insertion and deletion processes. The overall system can be used either for generation or for recognition.

The finite-state transducers of the automaton component serve to implement spelling changes, which may be triggered by either left or right context and which may ignore irrelevant intervening characters. As an example, the following automaton describes a simplified "Y-change" process that changes y to i before suffix es:

    "Y-Change" 5 5
        y y + s =    (lexical characters)
        i y = s =    (surface characters)
    state 1:  2 4 1 1 1    (normal state)
    state 2.  0 0 3 0 0    (require +s)
    state 3.  0 0 0 1 0    (require s)
    state 4:  2 4 5 1 1    (forbid +s)
    state 5:  2 4 1 0 1    (forbid s)

The details of this notation will not be explained here; basic familiarity with the Kimmo system is assumed. For further introduction, see Barton (1985), Karttunen (1983), and references cited therein.

1University of Helsinki, Finland, circa Fall 1983.

THE SEEDS OF COMPLEXITY

At first glance, the finite-state machines of the two-level model appear to promise unfailing computational efficiency. Both recognition and generation are built on the simple process of stepping the machines through the input. Lexical lookup is also fast, interleaved character by character with the quick left-to-right steps of the automata. The fundamental efficiency of finite-state machines promises to make the speed of Kimmo processing largely independent of the nature of the constraints that the automata encode:

    The most important technical feature of Koskenniemi's and our implementation of the Two-level model is that morphological rules are represented in the processor as automata, more specifically, as finite state transducers.... One important consequence of compiling [the grammar rules into automata] is that the complexity of the linguistic description of a language has no significant effect on the speed at which the forms of that language can be recognized or generated. This is due to the fact that finite state machines are very fast to operate because of their simplicity.... Although Finnish, for example, is morphologically a much more complicated language than English, there is no difference of the same magnitude in the processing times for the two languages.... [This fact] has some psycholinguistic interest because of the common sense observation that we talk about "simple" and "complex" languages but not about "fast" and "slow" ones. (Karttunen, 1983:166f)

For this kind of interest in the model to be sustained, it must be the model itself that wipes out processing difficulty, rather than some accidental property of the encoded morphological constraints.

Examined in detail, the runtime complexity of Kimmo processing can be traced to three main sources. The recognizer and generator must both run the finite-state machines of the automaton component; in addition, the recognizer must descend the letter trees that make up a lexicon. The recognizer must also decide which suffix lexicon to explore at the end of an entry. Finally, both the recognizer and the generator must discover the correct lexical-surface correspondence.
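The asymmetry between verifying and discovering a correspondence is worth making explicit. Checking a fully specified lexical/surface pair string against the automata is a single deterministic left-to-right sweep, as the sketch below shows (an illustration under assumed data structures -- explicit transition tables with the wildcard columns expanded -- not the actual Kimmo implementation):

    def accepts(automata, pair_string):
        """Step every two-level automaton through the same sequence
        of lexical/surface pairs.  Each automaton is (transitions,
        finals), with transitions[(state, pair)] -> next state; a
        missing entry is the dead state 0.  All automata must avoid
        0 and end in a final state."""
        for transitions, finals in automata:
            state = 1                    # Kimmo automata start in state 1
            for pair in pair_string:
                state = transitions.get((state, pair), 0)
                if state == 0:           # blocked: correspondence rejected
                    return False
            if state not in finals:
                return False
        return True

    # An x-consistency machine like the one in the reduction below,
    # with its wildcard column spelled out as the pair "=/=":
    x_con = ({(1, "x/T"): 2, (1, "x/F"): 3, (1, "=/="): 1,
              (2, "x/T"): 2, (2, "=/="): 2,
              (3, "x/F"): 3, (3, "=/="): 3}, {1, 2, 3})
    print(accepts([x_con], ["x/T", "=/=", "x/T"]))   # True
    print(accepts([x_con], ["x/T", "=/=", "x/F"]))   # False

Discovery is the hard direction: when only one side of each pair is given, the same machines license several pairs at once, and it is there that the search behavior below arises.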
All these aspects of runtime processing are apparent in traces of implemented Kimmo recognition, for instance when the recognizer analyzes the English surface form spiel (in 61 steps) according to Karttunen and Wittenburg's (1983) analysis (Figure 1). The stepping of transducers and letter-trees is ubiquitous. The search for the lexical-surface correspondence is also clearly displayed; for example, before backtracking to discover the correct lexical entry spiel, the recognizer considers the lexical string spy+ with y surfacing as i and + as e. Finally, after finding the putative root spy the recognizer must decide whether to search the lexicon I that contains the zero verbal ending of the present indicative, the lexicon AG storing the agentive suffix +er, or one of several other lexicons inhabited by inflectional endings such as +ed.

The finite-state framework makes it easy to step the automata; the letter-trees are likewise computationally well-behaved. It is more troublesome to navigate through the lexicons of the dictionary component, and the current implementation spends considerable time wandering about. However, changing the implementation of the dictionary component can sharply reduce this source of complexity; a merged dictionary with bit-vectors reduces the number of choices among alternative lexicons by allowing several to be searched at once (Barton, 1985).

More ominous with respect to worst-case behavior is the backtracking that results from local ambiguity in the construction of the lexical-surface correspondence. Even if only one possibility is globally compatible with the constraints imposed by the lexicon and the automata, there may not be enough evidence at every point in processing to choose the correct lexical-surface pair. Search behavior results. In English examples, misguided search subtrees are necessarily shallow because the relevant spelling-change processes are local in character. Since long-distance harmony processes are also possible, there can potentially be a long interval before the acceptability of a lexical-surface pair is ultimately determined. For instance, when vowel alternations within a verb stem are conditioned by the occurrence of particular tense suffixes, the recognizer must sometimes see the end of the word before making final decisions about the stem.

[Figure 1 table and search tree omitted: a 61-step recognition trace for the surface form spiel -- beginning 1 s, 2 sp, 3 spy and ending 59 "spiel" *** result -- together with the corresponding search tree. The analysis found is (("spiel" (N SG))). Key to tree nodes: --- normal traversal; LLL new lexicon; AAA blocking by automata; XXX no lexical-surface pairs compatible with surface char and dictionary; III blocking by leftover input; *** analysis found.]

Figure 1: These traces show the steps that the KIMMO recognizer for English goes through while analyzing the surface form spiel. Each line of the table on the left shows the lexical string and automaton states at the end of a step. If some automaton blocked, the automaton states are replaced by an XXX entry. An XXX entry with no automaton name indicates that the lexical string could not be extended because the surface character and lexical letter tree together ruled out all feasible pairs. After an XXX or *** entry, the recognizer backtracks and picks up from a previous choice point, indicated by the parenthesized step number before the lexical string. The tree on the right depicts the search graphically, reading from left to right and top to bottom, with vertical bars linking the choices at each choice point. The figures were generated with a KIMMO implementation written in an augmented version of MACLISP, based initially on Karttunen's (1983:182ff) algorithm description; the dictionary and automaton components for English were taken from Karttunen and Wittenburg (1983) with minor changes. This implementation searches depth-first as Karttunen's does, but explores the alternatives at a given depth in a different order from Karttunen's.

Ignoring the problem of choosing among alternative lexicons, it is easy to see that the use of finite-state machinery helps control only one of the two remaining sources of complexity. Stepping the automata should be fast, but the finite-state framework does not guarantee speed in the task of guessing the correct lexical-surface correspondence. The search required to find the correspondence may predominate. In fact, the Kimmo recognition and generation problems bear an uncomfortable resemblance to problems in the computational class NP. Informally, problems in NP have solutions that may be hard to guess but are easy to verify -- just the situation that might hold in the discovery of a Kimmo lexical-surface correspondence, since the automata can verify an acceptable correspondence quickly but may need search to discover one.
If the words of natural languages are easy to analyze, the efficiency of processing must result from some additional property that natural languages have, beyond those that are captured in the two-level model. Otherwise, computationally difficult problems might turn up in the two-level automata for some natural language, just as they do in the artificially constructed languages here. In fact, the reductions are abstractly modeled on the Kimmo treatment of harmony processes and other long-distance dependencies in natural languages.

The reductions use the computationally difficult Boolean satisfiability problems SAT and 3SAT, which involve deciding whether a CNF formula has a satisfying truth-assignment. It is easy to encode an arbitrary SAT problem as a Kimmo generation problem, hence the general problem of mapping from lexical to surface forms in Kimmo systems is NP-complete.² Given a CNF formula α, first construct a string σ by notational translation: use a minus sign for negation, a comma for conjunction, and no explicit operator for disjunction. Then the σ corresponding to the formula (¬x ∨ y) & (¬y ∨ z) & (x ∨ y ∨ z) is -xy,-yz,xyz. The notation is unambiguous without parentheses because α is required to be in CNF.

²Membership in NP is also required for this conclusion. A later section ("The Effect of Nulls") shows membership in NP by sketching how a nondeterministic machine could quickly solve Kimmo generation and recognition problems.

Second, construct a Kimmo automaton component A in three parts. (A varies from formula to formula only when the formulas involve different sets of variables.) The alphabet specification should list the variables in α together with the special characters T, F, minus sign, and comma; the equals sign should be declared as the Kimmo wildcard character, as usual. The consistency automata, one for each variable in α, should be constructed on the following model:

    "x-consistency" 3 3
        x x =    (lexical characters)
        T F =    (surface characters)
    1:  2 3 1    (x undecided)
    2:  2 0 2    (x true)
    3:  0 3 3    (x false)

The consistency automaton for variable x constrains the mapping from variables in the lexical string to truth-values in the surface string, ensuring that whatever value is assigned to x in one occurrence must be assigned to x in every occurrence. Finally, use the following satisfaction automaton, which does not vary from formula to formula (the third column, for the minus sign, is reconstructed here from the state descriptions, the garbled original listing only three column headers):

    "satisfaction" 3 4
        = = - ,    (lexical characters)
        T F - ,    (surface characters)
    1:  2 1 3 0    (no true seen in this group)
    2:  2 2 2 1    (true seen in this group)
    3:  1 2 0 0    (-F counts as true)

The satisfaction automaton determines whether the truth-values assigned to the variables cause the formula to come out true. Since the formula is in CNF, the requirement is that the groups between commas must all contain at least one true value.

The net result of the constraints imposed by the consistency and satisfaction automata is that some surface string can be generated from σ just in case the original formula has a satisfying truth-assignment. Furthermore, A and σ can be constructed in time polynomial in the length of α; thus SAT is polynomial-time reduced to the Kimmo generation problem, and the general case of Kimmo generation is at least as hard as SAT. Incidentally, note that it is local rather than global ambiguity that causes trouble; the generator system in the reduction can go through quite a bit of search even when there is just one final answer.
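To make the construction concrete, the following Python sketch restates the reduction; the encoding, the function names, and the brute-force search over assignments are ours, purely for illustration (an actual Kimmo generator would instead step the automata pair by pair, backtracking as in Figure 2):

    from itertools import product

    def encode(clauses):
        """Translate CNF (a list of clauses, each a list of literals such
        as '-x' or 'y') into the lexical string: minus for negation, comma
        for conjunction, plain adjacency for disjunction."""
        return ','.join(''.join(clause) for clause in clauses)

    def consistent(lexical, surface, variables):
        """What the per-variable consistency automata enforce: a variable
        must surface with a single truth value throughout. (Trivially true
        here, since each surface below is built from one assignment, but
        this is exactly the pairwise check the consistency machines do.)"""
        return all(len({s for l, s in zip(lexical, surface) if l == v}) <= 1
                   for v in variables)

    def satisfied(lexical, surface):
        """What the satisfaction automaton enforces: every comma-separated
        group contains a true literal; -F counts as true, -T as false."""
        groups, seen_true, negated = [], False, False
        for l, s in zip(lexical, surface):
            if l == ',':
                groups.append(seen_true)
                seen_true = False
            elif l == '-':
                negated = True
                continue
            else:
                seen_true = seen_true or ((s == 'T') != negated)
            negated = False
        groups.append(seen_true)
        return all(groups)

    def generate(lexical, variables):
        """Stand-in for the generator's backtracking search: try every
        truth-assignment and keep the surface strings the automata accept."""
        for values in product('TF', repeat=len(variables)):
            env = dict(zip(variables, values))
            surface = ''.join(env.get(c, c) for c in lexical)
            if consistent(lexical, surface, variables) and \
               satisfied(lexical, surface):
                yield surface

    sigma = encode([['-x', 'y'], ['-y', 'z'], ['x', 'y', 'z']])
    print(sigma)                         # -xy,-yz,xyz
    print(list(generate(sigma, 'xyz')))  # one surface per satisfying assignment

Some surface string comes out just in case the formula is satisfiable, which is all the reduction needs.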
Figure 2 traces the operation of the Kimmo generation algorithm on a (uniquely) satisfiable formula. Like the generator, the Kimmo recognizer can also be used to solve computationally difficult problems. One easy reduction treats 3SAT rather than SAT, uses negated alphabet symbols instead of a negation sign, and replaces the satisfaction automaton with constraints from the dictionary component; see Barton (1985) for details.

[Figure 2: a 74-step trace of the generator system for deciding the satisfiability of Boolean formulas in x, y, and z, applied to the encoded version of the (satisfiable) formula (¬x ∨ y) & (¬y ∨ z) & (¬y ∨ ¬z) & (x ∨ y ∨ z), i.e. to the lexical form -xy,-yz,-y-z,xyz. Though only one truth-assignment will satisfy the formula, it takes quite a bit of backtracking to find the result -FF,-FT,-F-T,FFT. The notation used for describing generator actions is similar to that used to describe recognizer actions in Figure 1, but a surface rather than a lexical string is the goal; XXX entries mark blocking by the consistency (x-con., y-con., z-con.) and satisfaction (satis.) automata, and a +-entry in the backtracking column indicates backtracking from an immediate failure in the preceding step, which does not require the full backtracking mechanism to be invoked.]

THE EFFECT OF PRECOMPILATION

Since the above reductions require both the language description and the input string to vary with the SAT/3SAT problem to be solved, there arises the question of whether some computationally intensive form of precompilation could blunt the force of the reduction, paying a large compilation cost once and allowing Kimmo runtime for a fixed grammar to be uniformly fast thereafter. This section considers four aspects of the precompilation question.

First, the external description of a Kimmo automaton or lexicon is not the same as the form used at runtime.
Instead, the external descriptions are converted to internal forms: RMACHINE and GMACHINE forms for automata, letter trees for lexicons (Gajek et al., 1983). Hence the complexity implied by the reduction might actually apply to the construction of these internal forms; the complexity of the generation problem (for instance) might be concentrated in the construction of the "feasible-pair list" and the GMACHINE. This possibility can be disposed of by reformulating the reduction so that the formal problems and the construction specify machines in terms of their internal forms rather than their external descriptions. The GMACHINEs for the class of machines created in the construction have a regular structure, and it is easy to build them directly instead of building descriptions in external format. As traces of recognizer operation suggest, it is runtime processing that makes translated SAT problems difficult for a Kimmo system to solve.

Second, there is another kind of preprocessing that might be expected to help. It is possible to compile a set of Kimmo automata into a single large automaton (a BIGMACHINE) that will run faster than the original set. The system will usually run faster with one large automaton than with several small ones, since it has only one machine to step and the speed of stepping a machine is largely independent of its size. Since it can take exponential time to build the BIGMACHINE for a translated SAT problem, the reduction formally allows the possibility that BIGMACHINE precompilation could make runtime processing uniformly efficient. However, an expensive BIGMACHINE precompilation step does not help runtime processing enough to change the fundamental complexity of the algorithms. Recall that the main ingredients of Kimmo runtime complexity are the mechanical operation of the automata, the difficulty of finding the right lexical-surface correspondence, and the necessity of choosing among alternative lexicons. BIGMACHINE precompilation will speed up the mechanical operation of the automata, but it will not help in the difficult task of deciding which lexical-surface pair will be globally acceptable. Precompilation oils the machinery, but accomplishes no radical changes.

Third, BIGMACHINE precompilation also sheds light on another precompilation question. Though BIGMACHINE precompilation involves exponential blowup in the worst case (for example, with the SAT automata), in practice the size of the BIGMACHINE varies -- thus naturally raising the question of what distinguishes the "explosive" sets of automata from those with more civilized behavior. It is sometimes suggested that the degree of interaction among constraints determines the amount of BIGMACHINE blowup. Since the computational difficulty of SAT problems results in large measure from their "global" character, the size of the BIGMACHINE for the SAT system comes as no surprise under the interaction theory. However, a slight change in the SAT automata demonstrates that BIGMACHINE size is not a good measure of interaction among constraints. Eliminate the satisfaction automaton from the generator system, leaving only the consistency automata for the variables. Then the system will not search for a satisfying truth-assignment, but merely for one that is internally consistent. This change entirely eliminates interactions among the automata; yet the BIGMACHINE must still be exponentially larger than the collection of individual automata, for its states must distinguish all the possible truth-assignments to the variables in order to enforce consistency. In fact, the lack of interactions can actually increase the size of the BIGMACHINE, since interactions constrain the set of reachable state-combinations.
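The blowup can be checked directly by sketching BIGMACHINE construction as the usual product of the component machines. The encoding below is ours: product states are tuples of component states, with 1/2/3 for undecided/true/false and 0 for blocking, as in the automaton tables above.

    def step(state, lex, surf, var):
        """One consistency automaton; pairs not mentioning var are wildcards."""
        if lex != var:
            return state
        if surf == 'T':
            return 2 if state in (1, 2) else 0
        if surf == 'F':
            return 3 if state in (1, 3) else 0
        return 0

    def bigmachine_states(variables):
        """Closure over the reachable tuples of component states."""
        pairs = [(v, tv) for v in variables for tv in 'TF']
        start = tuple(1 for _ in variables)
        seen, frontier = {start}, [start]
        while frontier:
            state = frontier.pop()
            for lex, surf in pairs:
                nxt = tuple(step(s, lex, surf, v)
                            for s, v in zip(state, variables))
                if 0 not in nxt and nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
        return seen

    for n in range(1, 8):
        print(n, len(bigmachine_states('vwxyzab'[:n])))  # 3, 9, 27, ... = 3**n

Even with the satisfaction automaton removed, so that there is no interaction at all, every tuple over {undecided, true, false} remains reachable and the product machine grows as 3 to the power of the number of variables.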
Finally, it is worth considering whether the nondeterminism involved in constructing the lexical-surface correspondence can be removed by standard determinization techniques. Every nondeterministic finite-state machine has a deterministic counterpart that is equivalent in the weak sense that it accepts the same language; aren't Kimmo automata just ordinary finite-state machines operating over an alphabet that consists of pairs of ordinary characters? Ignoring subtleties associated with null characters, Kimmo automata can indeed be viewed in this way when they are used to verify or reject hypothesized pairs of lexical and surface strings. However, in this use they do not need determinizing, for each cell of an automaton description already lists just one state. In the cases of primary interest -- generation and recognition -- the machines are used as genuine transducers rather than acceptors. The determinizing algorithms that apply to finite-state acceptors will not work on transducers, and in fact many finite-state transducers are not determinizable at all. Upon seeing the first occurrence of a variable in a SAT problem, a deterministic transducer cannot know in general whether to output T or F. It also cannot wait and output a truth-value later, since the variable might occur an unbounded number of times before there was sufficient evidence to assign the truth-value. A finite-state transducer would not be able in general to remember how many outputs had been deferred.

THE EFFECT OF NULLS

Since Kimmo systems can encode NP-complete problems, the general Kimmo generation and recognition problems are at least as hard as the difficult problems in NP. But could they be even harder? The answer depends on whether null characters are allowed. If nulls are completely forbidden, the problems are in NP, hence (given the previous result) NP-complete. If nulls are completely unrestricted, the problems are PSPACE-complete, thus probably even harder than the problems in NP. However, the full power of unrestricted null characters is not needed for linguistically relevant processing.

If null characters are disallowed, the generation problem for Kimmo systems can be solved quickly on a nondeterministic machine. Given a set of automata and a lexical string, the basic nondeterminism of the machine can be used to guess the lexical-surface correspondence, which the automata can then quickly verify. Since nulls are not permitted, the size of the guess cannot get out of hand; the lexical and surface strings will have the same length. The recognition problem can be solved in the same way except that the machine must also guess a path through the dictionary.
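The deterministic verification step is indeed easy. The sketch below (our own dict-based encoding of the automata, with 0 as the blocking state) checks a guessed correspondence in one left-to-right pass, in time linear in the length of the word for a fixed set of machines:

    def verify(lexical, surface, automata, final_states):
        """Check a guessed null-free lexical-surface correspondence against
        two-level automata, each a dict mapping
        (state, lexical_char, surface_char) -> next_state."""
        if len(lexical) != len(surface):
            return False                  # no nulls, so lengths must match
        states = [1] * len(automata)      # every machine starts in state 1
        for l, s in zip(lexical, surface):
            for i, delta in enumerate(automata):
                states[i] = delta.get((states[i], l, s), 0)
                if states[i] == 0:        # 0 encodes blocking
                    return False
        return all(q in accepting
                   for q, accepting in zip(states, final_states))

A nondeterministic machine that guesses the surface string and then runs this check is exactly the NP membership argument just given.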
If null characters are completely unrestricted, the above argument fails; the lexical and surface strings may differ so radically in length that the lexical-surface correspondence cannot be proposed or verified in time polynomial in input length. The problem becomes PSPACE-complete -- as hard as checking for a forced win from certain N x N Go configurations, for instance, and probably even harder than NP-complete problems (cf. Garey and Johnson, 1979:171ff). The proof involves showing that Kimmo systems with unrestricted nulls can easily be induced to work out, in the space between two input characters, a solution to the difficult Finite State Automata Intersection problem.

The PSPACE-completeness reduction shows that if two-level morphology is formally characterized in a way that leaves null characters completely unrestricted, it can be very hard for the recognizer to reconstruct the superficially null characters that may lexically intervene between two surface characters. However, unrestricted nulls surely are not needed for linguistically relevant Kimmo systems. Processing complexity can be reduced by any restriction that prevents the number of nulls between surface characters from getting too large. As a crude approximation to a reasonable constraint, the PSPACE-completeness reduction could be ruled out by forbidding entire lexicon entries from being deleted on the surface. A suitable restriction would make the general Kimmo recognition problems only NP-complete.

Both of the reductions remind us that problems involving finite-state machines can be hard. Determining membership in a finite-state language may be easy, but using finite-state machines for different tasks such as parsing or transduction can lead to problems that are computationally more difficult.

REFERENCES

Barton, E. (1985). "The Computational Complexity of Two-Level Morphology," A.I. Memo No. 856, M.I.T. Artificial Intelligence Laboratory, Cambridge, Mass.

Gajek, O., H. Beck, D. Elder, and G. Whittemore (1983). "LISP Implementation [of the KIMMO system]," Texas Linguistic Forum 22:187-202.

Garey, M., and D. Johnson (1979). Computers and Intractability. San Francisco: W. H. Freeman and Co.

Karttunen, L. (1983). "KIMMO: A Two-Level Morphological Analyzer," Texas Linguistic Forum 22:165-186.

Karttunen, L., and K. Wittenburg (1983). "A Two-Level Morphological Analysis of English," Texas Linguistic Forum 22:217-228.

ACKNOWLEDGEMENTS

This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the Laboratory's artificial intelligence research has been provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-80-C-0505. A version of this paper was presented to the Workshop on Finite-State Morphology, Center for the Study of Language and Information, Stanford University, July 29-30, 1985; the author is grateful to Lauri Karttunen for making that presentation possible. This research has benefited from guidance and commentary from Bob Berwick, and Bonnie Dorr and Eric Grimson have also helped improve the paper.
1986
9
TEMPORAL ONTOLOGY IN NATURAL LANGUAGE

Marc Moens and Mark Steedman
Centre for Cognitive Science and Dept. of AI, Univ. of Edinburgh, and
Dept. of Computer and Information Science, Univ. of Pennsylvania

ABSTRACT

A semantics of linguistic categories like tense, aspect, and certain temporal adverbials, and a theory of their use in defining the temporal relations of events, both require a more complex structure on the domain underlying the meaning representations than is commonly assumed. The paper proposes an ontology based on such notions as causation and consequence, rather than on purely temporal primitives. We claim that any manageable logic or other formal system for natural language temporal descriptions will have to embody such an ontology, as will any usable temporal database for knowledge about events which is to be interrogated using natural language.

1. Introduction

It has usually been assumed that the semantics of temporal expressions is directly related to the linear dimensional conception of time familiar from high-school physics - that is, to a model based on the number-line. However, there are good reasons for suspecting that such a conception is not the one that our linguistic categories are most directly related to.

When-clauses provide an example of the mismatch between linguistic temporal categories and a semantics based on such an assumption. Consider the following examples:

(1) When they built the 39th Street bridge...
    (a) ...a local architect drew up the plans.
    (b) ...they used the best materials.
    (c) ...they solved most of their traffic problems.

To map the temporal relations expressed in these examples onto linear time, and to try to express the semantics of when in terms of points or intervals (possibly associated with events), would appear to imply either that when is multiply ambiguous, allowing these points or intervals to be temporally related in at least three different ways, or that the relation expressed between main and when-clauses is one of "approximate coincidence". However, neither of these tactics explains the peculiarity of utterances like the following:

(2) #When my car broke down, the sun set.

The oddity of this statement seems to arise because the when-clause predicates something more than mere temporal coincidence, that is, some contingent relation such as a causal link between the two events. Of course, our knowledge of the world does not easily support such a link. This aspect of the sentence's meaning must stem from the sense-meaning of when, because parallel utterances using just after, at approximately the same time as, and the like, which predicate purely temporal coincidence, are perfectly felicitous.

We shall claim that the different temporal relations conveyed in examples (1) do not arise from any sense-ambiguity of when, or from any "fuzziness" in the relation that it expresses between the times referred to in the clauses it conjoins, but from the fact that the meaning of when is not primarily temporal at all. We shall argue that when has a single sense-meaning reflecting its role of establishing a temporal focus. The apparent diversity of meanings arises from the nature of this referent and the organisation of events and states of affairs in episodic memory under a relation we shall call "contingency", a term related to such notions as causality, rather than temporal sequentiality.
This contingent, non-temporal relation also determines the ontology of the elementary propositions denoting events and states of which episodic memory is composed, and it is to these that we turn first.

2. Temporal and Aspectual Categories

Utterances of English sentences can, following Vendler,¹ be classified into temporal/aspectual types on the basis of the tenses, aspects and adverbials with which they can cooccur (cf. Dowty, 1979). This "aspectual type" refers to the relation to other happenings in the discourse that a speaker predicates of the particular happening that their utterance describes. Thus an utterance of Harry reached the top is usually typical of what we will call a "culmination" -- informally, an event which the speaker views as accompanied by a transition to a new state of the world. This new state we will refer to as the "consequent state" of the event. Harry hiccupped is not usually viewed by speakers as leading to any specific change of state. It typifies what we call "point" expressions, that is punctual events whose consequences are not at issue. Similarly, Harry climbed typifies what we will call for obvious reasons a "process": such utterances describe an event as extended in time but not characterised by any particular conclusion or culmination. In contrast, Harry climbed to the top typically describes a state of affairs that also extends in time but that does have a particular culmination associated with it at which a change of state takes place. We classify such an utterance as a "culminated process". Finally, Harry is at the top typically describes a state.

Thus we can interpret Vendler as saying that a part of the meaning of any utterance of a sentence is one of a small number of temporal/aspectual profiles distinguished on a small number of dimensions. They can be summarized as in Figure 1.

                  EVENTS                                     STATES
             atomic                 extended
    +conseq  Harry left early       Sue built a sandcastle       John knows French
             At six, John arrived   The ice melted completely    He was in the kitchen
    -conseq  Sandra hiccupped       Max worked in the garden
             Paul winked            Alice played the piano

                                   Figure 1

¹Readers familiar with Vendler's work will realise that we have changed his terminology. We have done so both for notational convenience and to avoid the considerable confusion that has arisen concerning the precise meaning of the old terms.

It is important to be clear that this claim concerns sentences used in a context; sense-meanings of sentences or verbs in isolation are usually compatible with several (or even all possible) Vendlerian profiles, as Dowty and Verkuyl have pointed out - hence the frequent use of the word "typically" above. The details of this taxonomy and the criteria according to which utterances can be categorised are therefore less important than the observation that each primitive entity of a given type, such as the culmination-event of Harry's reaching the top, carries intimations of other associated events and states, such as the process by which the culmination was achieved, and the consequent state that followed. What linguistic devices like tenses, aspects, and temporal/aspectual adverbials appear to do is to transform entities of one type into these other "contingently" related entities, or to turn them into composites with those related entities.

The temporal/aspectual ontology that underlies these phenomena can be defined in terms of the state-transition network shown in Figure 2.
The semantics of tenses, aspectual auxiliaries and temporal adverbials is defined in terms of functions which map categories onto other categories, and having the important characteristic of "coercing" their argument to be of the appropriate type. Both the possibilities for coercing an input proposition, and the possibilities for the output category, are defined by the transition net. In addition, the felicity of a particular transition is conditional on support from knowledge and context.

Consider, for example, the combination of a culminated process expression with a for-adverbial, as in

(3) Sue played the sonata for a few minutes.

A for-adverbial coerces its input to be of the process variety. According to the network in Figure 2, such a transition is felicitous if the culmination point associated with the event of playing the sonata is "stripped off". As a result, there is no implication in (3) that Sue finished playing the sonata.

Another route through the network is possible in order to account for examples with for-adverbials: the culminated process, like any other event, can be viewed as an unstructured "point". A transition to turn it into a process then results in an iteration of occurrences at which Sue plays the sonata. This route through the network seems to be ruled out for (3) because it finds no support in our knowledge about sonatas and about how long they typically last. It does result, however, in a likely interpretation for a sentence like

(4) Sue played the sonata for about eight hours.

[Figure 2: the state-transition network over aspectual categories. The event categories CULMINATION (atomic, +conseq), CULMINATED PROCESS (extended, +conseq), POINT (atomic, -conseq) and PROCESS (extended, -conseq) are linked by transitions such as stripping or adding a culmination point, adding a preparatory process, viewing an event as an unstructured point, and iteration, together with mappings to the states describing consequences or a process in progress.]

Not all aspectual/temporal adverbials expressing a time span have the same functional type. In-adverbials, for example, coerce their input to be a culminated process expression. This means that combination with a culmination expression requires the transition to be made to the culminated process node. According to the aspectual network in Figure 2 this transition is a felicitous one if the context allows a preparatory process to be associated with the culmination, as in (5):

(5) Laura reached the top in two hours.

The in-adverbial then defines the length of this preparatory period.

Since the arcs describe what the world has to be like for transitions to be made felicitously, it is obvious that there are expressions that will resist certain changes. For example, it will be hard to find a context in which an in-adverbial can be combined with a culmination expression like Harry accidentally spilled his coffee, since it is hard to imagine a context in which a preparatory process can be associated with an involuntary act. A similar problem arises in connection with the following example:

(6) John ran in a few minutes.

The process expression John ran has to be changed into a culminated process expression before combination with the in-adverbial is possible. One way in which the network in Figure 2 will permit the change from a process to a culminated process is if the context allows a culmination point to be associated with the process itself. General world knowledge makes this rather hard for a sentence like John ran, except in the case where John habitually runs a particular distance, such as a measured mile. If the in-adverbial had conveyed a specific duration, such as in four minutes, then the analysis would make sense, as Dowty has pointed out.
However, the unspecific in a few minutes continues to resist this interpretation. However, another route is also possible for (6): the process of John running can be made into an atomic point, and thence into a culmination in its own right. This culmination can then acquire a preparatory process of its own -- which we can think of as preparing to run -- to become the culminated process which the adverbial requires. This time, there is no conflict with the content of the adverbial, so this reading is the most accessible of the two.

Progressive auxiliaries coerce their input to be a process expression. The result of the application is a progressive state, which describes the process as being in progress. This means that, when a culmination expression like reach the top is used with a progressive, a transition path has to be found from the culmination node to the process node. According to the transition network, this involves first adding a preparatory process to the culmination, and then stripping off the culmination point. As a result, the progressive sentence only describes the preparation as ongoing and no longer asserts that the original culmination even occurred. There would be no contradiction in continuing

(7) Harry was reaching the top

as in

(8) Harry was reaching the top but he slipped and fell before he got there.

As Moens & Steedman (1986) point out, the fact that according to the present theory, progressives coerce their input to be a process, so that any associated culmination is stripped away and no longer contributes to truth conditions, provides a resolution of the "imperfective paradox" (Dowty 1979), without appealing to theory-external constructs like "inertia worlds".

A perfect, as in

(9) Harry has reached the top

refers to the consequent state of the culmination. It requires its input category to be either a culmination or a culminated process, and maps this expression into its consequent state. Informal evidence that it does so can be obtained by noticing that perfects are infelicitous if the salient consequences are not in force. The most obvious of these consequences for (9) is that Harry still be at the top, although as usual there are other possibilities.

Since the transition network includes loops, it will allow us to define indefinitely complex temporal/aspectual categories, like the one evoked by the following sentence:

(10) It took me two days to play the "Minute Waltz" in less than sixty seconds for more than an hour.

The culminated process expression play the Minute Waltz can combine straightforwardly with the in-adverbial, indicating how long it takes to reach the culmination point of finishing playing the Minute Waltz. Combination with the for-adverbial requires this expression to be turned into a process - the most obvious route through the network being that through the point node. The resulting culminated process expression describes the iterated process of playing the Minute Waltz in less than sixty seconds as lasting for more than an hour. The expression it took me..., finally, is like an in-adverbial in that it is looking for a culminated process expression to combine with. It finds one in the expression to play the Minute Waltz in less than sixty seconds for more than an hour, but combination is hampered by the fact that there is a conflict in the length of time the adverbials describe. In the case of (10), the whole culminated process is instead viewed as a culmination in its own right (via the path through the point node). Knowledge concerning such musical feats then supplies an appropriate preparatory process which we can think of as practising. The adverbial it took me two days then defines the temporal extent of this preparatory process needed to reach the point at which repeatedly playing that piece of music so fast for such a considerable length of time became a newly acquired skill.

This basic framework thus allows for a unified semantics of a wide variety of aspectual adverbials, the progressive, the perfect, and iterative expressions in English. It is also used to explain the effect of bare plurals and certain varieties of negation on the overall temporal structure of discourse (Moens forthcoming).
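The network's role in these derivations can be sketched in a few lines of Python. The encoding and the names below are ours, and the arc labels compress the felicity conditions, which in the model itself are checked against context and world knowledge:

    # Arcs of the Figure 2 network, labelled by the transition performed.
    ARCS = {
        'culmination':        [('culminated process', 'add preparatory process'),
                               ('point', 'view as unstructured point'),
                               ('consequent state', 'perfect: consequences hold')],
        'culminated process': [('process', 'strip culmination point'),
                               ('point', 'view as unstructured point'),
                               ('consequent state', 'perfect: consequences hold')],
        'process':            [('culminated process', 'add culmination point'),
                               ('point', 'view as unstructured point'),
                               ('progressive state', 'view as in progress')],
        'point':              [('process', 'iteration'),
                               ('culmination', 'associate consequences')],
    }

    def coercion_paths(source, target, visited=None):
        """Enumerate cycle-free routes from one aspectual category to the
        one a coercive expression demands; context must pick among them."""
        visited = visited or {source}
        if source == target:
            yield []
            return
        for nxt, label in ARCS.get(source, []):
            if nxt not in visited:
                for rest in coercion_paths(nxt, target, visited | {nxt}):
                    yield [label] + rest

    # The two readings of the for-adverbial examples (3) and (4):
    for route in coercion_paths('culminated process', 'process'):
        print(' -> '.join(route))

The first route found ('strip culmination point') corresponds to the reading of (3); the second ('view as unstructured point -> iteration') to the iterative reading of (4). The in-adverbial cases work the same way with 'culminated process' as the target category.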
All of the permissible transitions between aspectual categories illustrated in Figure 2 appear to be related to a single elementary contingency-based event structure which we call a "nucleus". A nucleus is defined as a structure comprising a culmination, an associated preparatory process, and a consequent state. It can be represented pictorially as in Figure 3.²

        preparatory process            consequent state
    ///////////////////////|________________________________
                           |
                      culmination

                         Figure 3

²A similar event structure is proposed by Passonneau (1987).

Any or all of these elements may be compound: the preparation may consist of a number of discrete steps, for example the stages of climbing, having lunch or whatever, leading to the culmination of reaching the top. The consequent state may also be compound. Most importantly, we shall see that it includes the further events, if any, that are in the same sequence of contingently related events as the culmination. Similarly, the culmination may itself be a complex event - such as the entire culminated process of climbing a mountain. (In this case, the associated preparatory process and consequent state will be quite different ones to those internal to the culminated process itself.)

The device is intended to embody the proposal that when we hear about an event like climbing a mountain in conjunction with some coercive aspectual category which forces it to undergo a transition, then the alternatives that are available are:

a) to decompose the core event into a nucleus and to make a transition to one of the components, such as the preparatory activity of climbing or to the consequent state of having climbed the mountain; or

b) to treat the entire event as a culmination, to compose it into a nucleus with whatever preparation and consequences the context provides for the activity of climbing a mountain, and to make the transition to either one of those.

We further claim that those are the only alternatives. The concept of a nucleus not only explains the transitions of Figure 2, but also provides an answer to the question raised in the introduction concerning the apparent vagaries in the meaning of when-clauses.

3. When-clauses

The aspects and temporal/aspectual adverbials considered above all act to modify or change the aspectual class of the core proposition, subject to the limits imposed by the network in Figure 2, which we claim is in turn determined by the organisation of episodic memory. However, tenses and certain other varieties of adverbial adjuncts have a rather different character. Tense is widely regarded as an anaphoric category, requiring a previously established temporal referent. The referent for a present tense is usually the time of speech, but the referent for a past tense must be explicitly established. Such a referent is usually established using a second type of "temporal" adverbial, such as once upon a time,
at five o'clock last Saturday, while I was cleaning my teeth, or when I woke up this morning.

Most accounts of the anaphoric nature of tense have invoked Reichenbach's (1947) trinity of underlying times, and his concept of the "positional" use of the reference time which he called "R". Under these accounts (reviewed in Steedman, 1982), the adjuncts establish a reference time to which the reference time of a main clause and subsequent same-tensed clauses may attach or refer, in much the same way that various species of full noun phrases establish referents for pronouns. However, in one respect, the past tense does not behave like a pronoun. Use of a pronoun such as "she" does not change the referent to which a subsequent use of the same pronoun may refer, whereas using a past tense may. In the following example, the temporal reference point for sentence (b) seems to have moved on from the time established by the adjunct in (a):

(11) a. At exactly five o'clock, Harry walked in.
     b. He sat down.

This fact has caused theorists such as Dowty (1986), Hinrichs (1986) and Partee (1984) to stipulate that the reference time autonomously advances during a narrative. However, such a stipulation (besides creating problems for the theory vis-à-vis those narratives where reference time seems not to advance) seems to be unnecessary, since the amount by which it advances still has to be determined by context.

The concept of a nucleus that was invoked above to explain the varieties of aspectual categories offers us exactly what we need to explain both the fact of the advance and its extent. We simply need to assume that a main clause event such as Harry walked in is interpreted as an entire nucleus, complete with consequent state, for by definition the consequent state includes whatever other events were contingent upon Harry walking in, including whatever he did next. Provided that the context (or the hearer's assumptions about the world) supports the idea that a subsequent main clause identifies this next contingent event, then it will provide the temporal referent for that main clause.

In its ability to refer to entities that have not been explicitly mentioned, but whose existence has merely been implied by the presence of an entity that has been mentioned, tense appears more like a definite NP like the music in the following example than like a pronoun, as Webber (1987) points out.

(12) I went to a party last night. The music was wonderful.

A similar move is all that is required to explain the puzzle concerning the apparent ambiguity of when-clauses with which the paper began. A when-clause behaves rather like one of those phrases that are used to explicitly change topic, like and your father in the following example from Isard (1975):

(13) And your father, how is he?

A when-clause introduces a novel temporal referent into focus whose unique identifiability in the hearer's memory is similarly presupposed. However, again the focussed temporal referent is an entire nucleus, and again an event main clause can attach itself anywhere within this structure that world knowledge will allow. For example, consider the example (1) with which we began (repeated here):

(14) When they built the 39th Street bridge...
    (a) ...a local architect drew up the plans.
    (b) ...they used the best materials.
    (c) ...they solved most of their traffic problems.
Once the core event of the when-clause has been identified in memory, the hearer has the same two alternatives described before: either it is decomposed into a preparatory process, a culmination and a consequent state, or the entire event is treated as itself the culmination of another nucleus. Either way, once the nucleus is established, the reference time of the main clause has to be situated somewhere within it - the exact location being determined by knowledge of the entities involved and the episode in question. So in example (a) the entire culminated process of building the bridge becomes a culmination (via a path in Figure 2 which passes through the "point" node) which is associated in a nucleus with preparations for, and consequences of, the entire business, as in Figure 4:

    they prepare to                  they have built
    build the bridge                 the bridge
    /////////////////|____________________________
                     |
           they build the bridge

                   Figure 4

The drawing up of the plans is then, for reasons to do with knowledge of the world, situated in the preparatory phase.

In example (b), in contrast, the building of the bridge is decomposed into a quite different preparatory process of building, a quite different culmination of completing the bridge and some consequences which we take to be also subtly distinct from those in the previous case, as in Figure 5. The use of the best materials is then, as in (a), situated in the preparatory process - but it is a different one this time.

    they build                       they have completed
    the bridge                       the bridge
    /////////////////|____________________________
                     |
          they complete the bridge

                   Figure 5

Example (c) is like (a) in giving rise to the nucleus in Figure 4, but pragmatics demands that the main clause be situated somewhere in the consequent state of building the bridge.

Thus, a main clause event can potentially be situated anywhere along this nucleus, subject to support from knowledge about the precise events involved. But example (2) is still strange, because it is so hard to think of any relation that is supported in this way:

(15) #When my car broke down, the sun set.

The when-clause defines a nucleus, consisting of whatever process we can think of as leading up to the car's break-down, the break-down itself and its possible or actual consequences. It is not clear where along this nucleus the culmination of the sun set could be situated: it is not easy to imagine that it is a functional part of the preparatory process typically associated with a break-down, and it is similarly hard to imagine that it can be a part of the consequent state, so under most imaginable circumstances, the utterance remains bizarre.

The constraints when places on possible interpretations of the relation between subordinate and main clause are therefore quite strong. First, general and specific knowledge about the event described in the when-clause has to support the association of a complete nucleus with it. Secondly, world knowledge also has to support the contingency relation between the events in subordinate and main clause. As a result, many constructed examples sound strange or are considered to be infelicitous, because too much context has to be imported to make sense of them.

In all of the cases discussed so far, the main clause has been an event of some variety. When the main clause is stative, as in the following examples, the effect is much the same. That is to say, the when-clause establishes a nucleus, and the stative is asserted or attached wherever world knowledge permits within the nucleus.
The only difference is that statives are by definition unbounded with respect to the reference time that they are predicated of, and outlast it. It follows that they can usually fit in almost anywhere, and therefore tend not to coerce the when-clause, or to induce the causal/contingent interpretations that we claim characterise the corresponding sentences with events as main clauses:

(16) When they built that bridge
     ...I was still a young lad.
     ...my grandfather had been dead for several years.
     ...my aunt was having an affair with the milkman.
     ...my father used to play squash.

However, world knowledge may on occasion constrain the relation of a stative main clause, and force it to attach to or describe a situation holding over either the preparatory process or the consequent state of the subordinate clause, as in the following examples (cf. Smith 1983):

(17) When Harry broke Sue's vase,
     ...she was in a good mood.
     ...she was in a bad mood.

4. Towards a Formal Representation

We have argued in this paper that a principled and unified semantics of natural language categories like tense, aspect and aspectual/temporal adverbials requires an ontology based on contingency rather than temporality. The notion of "nucleus" plays a crucial role in this ontology. The process of temporal reference involves reference to the appropriate part of a nucleus, where appropriateness is a function of the inherent meaning of the core expression, of the coercive nature of co-occurring linguistic expressions, and of particular and general knowledge about the area of discourse.

The identification of the correct ontology is also a vital preliminary to the construction and management of temporal databases. Effective exchange of information between people and machines is easier if the data structures that are used to organise the information in the machine correspond in a natural way to the conceptual structures people use to organize the same information. In fact, the penalties for a bad fit between data-structures and human concepts are usually crippling for any attempt to provide natural language interfaces for data base systems. Information extracted from natural language text can only be stored to the extent that it fits the preconceived formats, usually resulting in loss of information. Conversely, such data structures cannot easily be queried using natural language if there is a bad fit between the conceptual structure implicit in the query and the conceptual structure of the database.

The "contingency-based" ontology that we are advocating here has a number of implications for the construction and management of such temporal databases. Rather than a homogeneous database of dated points or intervals, we should partition it into distinct sequences of causally or otherwise contingently related sequences of events which we will call "episodes", each leading to the satisfaction of a particular goal or intention. This partition will quite incidentally define a partial temporal ordering on the events, but the primary purpose of such sequences is more related to the notion of a plan of action or an explanation of an event's occurrence than to anything to do with time itself. It follows that only events that are contingently related necessarily have well defined temporal relations in memory. A first attempt to investigate this kind of system was reported by Steedman (1982), using a program that verified queries against a database structured according to some of the principles outlined above.
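As a toy illustration of such an episode-partitioned store (the class and the names below are invented for the purpose, and are not a description of the 1982 program):

    class EpisodeStore:
        """Events grouped into contingently related chains ('episodes');
        temporal order is defined only within a chain."""
        def __init__(self):
            self.episodes = {}

        def record(self, episode, event):
            self.episodes.setdefault(episode, []).append(event)

        def precedes(self, e1, e2):
            """True/False inside an episode; None (undefined) across them."""
            for chain in self.episodes.values():
                if e1 in chain and e2 in chain:
                    return chain.index(e1) < chain.index(e2)
            return None

    db = EpisodeStore()
    db.record('build-bridge', 'draw up the plans')
    db.record('build-bridge', 'lay the foundations')
    db.record('car-trouble', 'car breaks down')
    print(db.precedes('draw up the plans', 'lay the foundations'))  # True
    print(db.precedes('car breaks down', 'lay the foundations'))    # None

Only the contingently linked pair receives a defined temporal order, as the partition requires.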
These principles can be described using Kowalski's event-calculus (Kowalski & Sergot 1986). In this framework, there are primitives called events, the occurrence of which usually implies the existence of periods of time over which states hold. In the terms of the present paper, these "events" are either "points" or "culminations" (depending on whether they are in fact associated with consequent states - see section 2). For example, in the world of academic promotions which Kowalski and Sergot take as an example, an event description like John was promoted from the rank of lecturer to the rank of professor is a culmination which implies that there was a period of time, ended by this event, during which John had the rank of lecturer, and there is a period of time, started by that same event, during which John had the rank of professor.

The events in the event calculus are given unique identifiers, but are not necessarily associated with absolute time. Moreover, they can be partially ordered with respect to each other, or occur simultaneously. Events themselves may also be described only partially; later information can be added when it becomes available. These features, which they share with the corresponding primitives in a number of other formalisms, such as those of McDermott (1982), Allen (1984) and Lansky (1986), constitute an advance over temporal representation formalisms based on the situation calculus (McCarthy & Hayes 1969).

Although Kowalski's events are undecomposable points or culminations, they can be used to describe extended events such as our processes, in terms of a pair identifying their starting point and the point at which they stop (in the case of processes) or their culmination (in the case of culminated processes). This means that a process expression like John ran will introduce two events, one indicating the start of the process and one indicating the endpoint. Just like the point events considered by Kowalski and Sergot, these events have certain properties or states associated with them. The starting-point of the process referred to by uttering John ran marks the beginning of a progressive state that we refer to when we use a progressive like John is running, a state which is terminated by the corresponding endpoint event. This duality between events and states (which was also exploited in Steedman, 1982) is very useful for representing the kind of ontology that we have argued natural language categories reflect.

But one shortcoming of Kowalski's event calculus is the absence of other than temporal relations between the events. The best worked out event-based model that takes into account causal as well as temporal relations is Lansky's (1986). The representation she presents is based on GEM (Lansky & Owicki 1983), a tool for the specification and verification of concurrent programs. GEM reifies events and explicitly represents both their causal and temporal relations. It also provides mechanisms for structuring events into so-called "locations of activity", the boundaries on these locations being boundaries of causal access - as in our episodes. In Lansky (1986), the GEM tool is used to build event-based knowledge representations for use in planners. She suggests the use of three accessibility relations: temporal precedence (<), causality or contingency (@), and simultaneity ($). These relations have the following properties:

    <  :  irreflexive, antisymmetric, transitive
    @  :  irreflexive, antisymmetric, intransitive
    $  :  reflexive, symmetric, transitive

Because we follow Lansky in making the causality/contingency relation @ intransitive, we avoid certain notorious problems in the treatment of when-clauses and perfects, which arise because the search for possible consequences of an event has to be restricted to the first event on the chain of contingencies. Thus, when (18a) and (b) are asserted, it would be wrong to infer (c):

(18) (a) When John left, Sue cried
     (b) When Sue cried, her mother got upset
     (c) When John left, Sue's mother got upset

The reason is exactly the same as the reason that it would be wrong to infer that Sue's mother got upset because John left, and has nothing to do with the purely temporal relations of these events.
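A minimal sketch of this restriction (our own encoding: the @ relation is stored as explicit pairs and deliberately not closed under composition):

    # Direct contingency links asserted by (18a) and (18b).
    CONTINGENT = {('john leaves', 'sue cries'),
                  ('sue cries', "sue's mother gets upset")}

    def consequences(event):
        """Follow at most one @-link: only the first event on the chain
        of contingencies counts, since @ is intransitive."""
        return {b for a, b in CONTINGENT if a == event}

    print(consequences('john leaves'))  # {'sue cries'}: (18c) is not derived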
It should also be noted that the notion of causality or contingency used here (in line with Lansky's proposals) is weaker than that used in other representation schemes (for example that of McDermott 1982) in that causality is here decoupled from eventuality: if an event A stands in a causal relation to event B, then an occurrence of A will not automatically lead to an occurrence of B: John laying the foundations of the house is a prerequisite for or enables him to build the walls and roof but does not "cause" it in the more traditional sense of the word and does not automatically or inevitably lead to him building the walls.

5. Conclusion

Many of the apparent anomalies and ambiguities that plague current semantic accounts of temporal expressions in natural language stem from the assumption that a linear model of time is the one that our linguistic categories are most directly related to. A more principled semantics is possible on the assumption that the temporal categories of tense, aspect, aspectual adverbials and of propositions themselves refer to a mental representation of events that is structured on other than purely temporal principles, and to which the notion of a nucleus or contingently related sequence of preparatory process, goal event and consequent state is central.

We see this claim as a logical preliminary to the choice of any particular formal representation. However, certain properties of the event-based calculi of Kowalski and Sergot, and of Lansky, seem to offer an appropriate representation for a semantics of this kind.³

³A Prolog program incorporating the above extension to the event calculus is under construction and will be presented in Moens (forthcoming).

ACKNOWLEDGEMENTS

We thank Ethel Schuster and Bonnie Lynn Webber for reading and commenting upon drafts. Parts of the research were supported by: an Edinburgh University Graduate Studentship; an ESPRIT grant (project 393) to CCS, Univ. Edinburgh; a Sloan Foundation grant to the Cognitive Science Program, Univ. Pennsylvania; and NSF grant IRI-10413 A02, ARO grant DAA6-29-84K-0061 and DARPA grant N0014-85-K0018 to CIS, Univ. Pennsylvania.

REFERENCES

Allen, J. (1984). Towards a general theory of action and time. Artificial Intelligence, 23, pp. 123-154.

Dowty, D. (1979). Word Meaning and Montague Grammar. Dordrecht: Reidel.

Dowty, D. (1986). The effects of aspectual class on the temporal structure of discourse. Linguistics and Philosophy 9, 37-62.

Hinrichs, E. (1986). Temporal anaphora in discourses of English. Linguistics and Philosophy 9, 63-82.

Kowalski, R. & M. Sergot (1986). A logic-based calculus of events. New Generation Computing 4, 67-95.
Isard, S.D. (1975). Changing the context. In E. Keenan (ed.), Formal Semantics of Natural Language, London, Cambridge University Press.

Lansky, A. (1986). A representation of parallel activity based on events, structure and causality. In Workshop on Planning and Reasoning about Action, Timberline Lodge, Mount Hood, Oregon, 1986, 50-86.

Lansky, A. & S. Owicki (1983). GEM: a tool for concurrency specification and verification. In Proceedings of the Second Annual ACM Symposium on Principles of Distributed Computing, August 1983, 198-212.

McCarthy, J. & P.J. Hayes (1969). Some philosophical problems from the standpoint of artificial intelligence. In B. Meltzer & D. Michie (eds.) Machine Intelligence, Volume 4, 463-502. Edinburgh, Edinburgh University Press.

McDermott, D. (1982). A temporal logic for reasoning about processes and plans. Cognitive Science, 6, 101-155.

Moens, M. & M. Steedman (1986). Temporal Information and Natural Language Processing. Edinburgh Research Papers in Cognitive Science, Centre for Cognitive Science, University of Edinburgh.

Moens, M. (forthcoming). Tense, Aspect and Temporal Reference. PhD dissertation, University of Edinburgh.

Partee, Barbara H. (1984). Nominal and temporal anaphora. Linguistics and Philosophy, 7, pp. 243-286.

Passonneau, Rebecca J. (1987). Situations and intervals. Paper to the 25th Annual Conference of the ACL, Stanford, July 1987.

Reichenbach, H. (1947). Elements of Symbolic Logic. New York, Free Press, 1966.

Schuster, E. (1986). Towards a computational model of anaphora in discourse: reference to events and actions. Tech. Report CIS-MS-86-34, CIS, Univ. of Pennsylvania.

Smith, C. (1983). A theory of aspectual choice. Language, 59, pp. 479-501.

Steedman, M. (1982). Reference to past time. In Speech, Place and Action, R. Jarvella & W. Klein (eds.), 125-157. New York: Wiley.

Vendler, Z. (1967). Verbs and times. In Linguistics in Philosophy, Z. Vendler, 97-121. Ithaca, N.Y.: Cornell University Press.

Webber, B. (1987). The interpretation of tense in discourse. Paper to the 25th Annual Conference of the ACL, Stanford, July 1987.
1987
1
Constituent-Based Morphological Parsing: A New Approach to the Problem of Word-Recognition

Richard Sproat
Linguistics Department
AT&T Bell Laboratories
600 Mountain Ave
Murray Hill, NJ 07974

Barbara Brunson*
AT&T Bell Laboratories and Department of Linguistics
University of Toronto
Toronto, Ontario, Canada M5S 1A1

Abstract

We present a model of morphological processing which directly encodes prosodic constituency, a notion which is clearly crucial in many widespread morphological processes. The model has been implemented for the Australian language Warlpiri and has been successfully interfaced with a syntactic parser for that language (Brunson, 1986). We contrast our approach with approaches to morphological parsing in the KIMMO framework.

1. Introduction

The "Two-Level" Model of morphological processing developed by Kimmo Koskenniemi (1983), henceforth KIMMO, has spawned much subsequent research in the same framework (Karttunen, 1983; inter alia). Important design features of this model include a set of morpheme lexicons and a set of parallel finite state transducers which implement phonological rules mapping surface strings to lexical representations. Not only are phonological rules finite state, but the control structure of the model is itself finite state.

Two criticisms of this model can be put forth. First, KIMMO is not guaranteed to be computationally efficient (Barton, 1986). Second, there are many interesting morphological phenomena that KIMMO cannot cover without significantly redesigning the model. In this paper we will address the second point. We will present a model of word-structure recognition which, unlike the KIMMO model, makes heavy use of prosodic constituent structure. Not only is reference to prosodic constituency necessary to provide a principled way of dealing with certain morphological processes, but such an approach to phonological processing is crucial for any interface of current parsing systems with speech recognition systems (Church, 1983). The model has been implemented for the Australian language Warlpiri. We will describe how the parser works, and how it handles morphological phenomena that would, at best, require inelegant mechanisms within the KIMMO model. We will also show how we can handle morphological phenomena that are not exemplified in Warlpiri but which are of a similar ilk.

2. Two Facts about Morphology

We will now consider two issues in morphology, namely prosody and the non-isomorphism of syntactic and phonological structure. We maintain that these are central to the task of a morphological analyzer and, hence, have incorporated them into our model.

2.1 The Relevance of Prosody to Morphology

It has become increasingly evident from research within Generative Linguistics that
It is necessary, therefore, that morphological processing systems have a mechanism for dealing with prosody in a general way. KIMMO does not provide such a mechanism. Instead, it assumes that the problem of morphological recognition is one of matching some input string to a set of lexical strings. Prosodic considerations do not even enter the picture. The KIMMO model probably could be extended in various ways to cover such phenomena, but such extensions would constitute a significant change in the theory. Reduplication would require a particularly significant revision, since it both involves reference to prosodic structure and requires a copy mechanism which is not finite state in any interesting sense. Note that although reduplication is, strictly speaking, bounded by the maximal size of some well-defined prosodic unit, and hence is effectively finite state, finite state recognition for reduplication would require the anticipation (i.e., precompilation) of all possible reduplicative-affix/stem sequences. Reduplication in natural language involves recognition of the language ww, a language which is well known not to be regular. As we shall see, reduplication is handled in our model by directly encoding prosody and allowing for a bounded matching mechanism.

2.2 The Non-Isomorphism of Morphophonology and Morphosyntax

Another fundamental property of morphology is the fact that the structure required for the phonology is not necessarily isomorphic to the structure required for the morphosyntax. This point has been argued extensively in work such as Marantz (1984) and Sproat (1985). For example, in Warlpiri a number of clitics which are suffixes as far as the phonology is concerned (i.e., they undergo Vowel Harmony [3] with the word to which they attach) are separate words from the point of view of the syntax. For instance, the auxiliary in Warlpiri tensed clauses generally occurs as the second syntactic constituent of the sentence; phonologically, however, it is part of the first constituent. This phenomenon is by no means limited to scattered examples in a few languages, but apparently represents a very important generalization about the interaction of phonology and syntax in the morphology: they operate over different, though related, structures. We propose to capture this observation by making the syntactic module of the parser largely independent of the phonological module, as we shall outline below.

3. A Description of the Warlpiri Parsing System

The main reason for choosing Warlpiri as our test domain is that Warlpiri provides a sufficient number of interesting morphological and phonological phenomena, such as Vowel Harmony and reduplication, without having an overabundance of phonological rules (unlike Finnish, which has roughly 20 rules in the KIMMO description). It is thus possible to build a system which has a reasonable coverage of the morphological and phonological processes evident in the language. At the same time, in order to cover the Warlpiri data the system must be designed to handle morphological processes whose description crucially depends upon prosodic constituency.

The task of the morphophonological parser is to find out where the word boundaries are and then where the morphemes are. It receives as input a stream of segments and a parallel stream of suprasegmental stress information. The input streams may represent a single word or they may represent a sequence of words; in any case, no word or morpheme boundaries are provided in the input.
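To make this input format concrete, here is a minimal sketch of one way the parallel streams might be represented. The class name and layout are our own illustration (anticipating the /pangupangurnu/ example worked through below), not the authors' implementation:

    from dataclasses import dataclass

    @dataclass
    class ParserInput:
        # Parallel streams: segments carry no word or morpheme boundaries;
        # digraphs such as "ng" and "rn" count as single segments.
        segments: list   # e.g. ["p", "a", "ng", "u", ...]
        stress: dict     # segment index -> stress level (1 = primary, 2 = secondary)

    # /pangupangurnu/ 'dug repeatedly', stress as in the example below
    example = ParserInput(
        segments=["p", "a", "ng", "u", "p", "a", "ng", "u", "rn", "u"],
        stress={1: 1, 5: 2},
    )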
The parser checks to see whether a morpheme sequence can correspond to the input stream by verifying that the appropriate phonological rules apply in the appropriate domains. It then passes a 'flattened representation' of the morphological structure, consisting merely of the morphemes in their linear order with word boundaries, off to the syntactic parser. The syntactic parser for Warlpiri which we have been using is due to Brunson (1986). This parser was designed to take as input a sequence of morphemes rather than a sequence of fully formed words, as most syntactic parsers do. Such a parser embodies our belief that the task of building a syntactic representation for words should be handled by the syntactic parser and not by a separate morphosyntactic parser. In this way clitics can readily be identified in their syntactic roles independent of their phonological constituency. Let us now turn to a concrete example from Warlpiri and show how we parse the morphemes and pass on the 'flattened representation' to the syntactic parser.

4. Parsing the Morphophonology

We will take as an example for discussion the word /pangupangurnu/, which means 'dug repeatedly' and which is composed of the morphemes Reduplication + pangi + rnu (pangi = 'dig', rnu = 'past') (Nash, 1980), where Reduplication is the verbal reduplication morpheme. Of interest in this example are regressive Vowel Harmony [4] and, of course, reduplication. The input consists of the stream of segments and a stream of stresses [5]:

    pangupangurnu
     1    2

There is a question, of course, as to whether one could reliably derive stress information from connected speech input. Preliminary studies of Warlpiri intonation suggest that main word stress at least is extractable from acoustic input (see Figure 1). We presume, however, that other phonetic facts may also help determine the prosody; see Church (1983) for a method for determining English prosodic constituents from observable allophonic variation.

The first task is to find the prosodic constituents, i.e. to find where the syllables are, where the feet [6] are, and where the prosodic words are. The particular parsing algorithm we adopt is that of Church (1983), which is not left-to-right, but nothing hinges on this decision; indeed, as we point out below, we will ultimately want a left-to-right parsing algorithm so that the phonological and syntactic parsing can be interleaved. The prosody of Warlpiri is simple in that syllable types are limited and phonological words are reliably left-stressed. In the particular example, the parser will tell us that the syllables are /pa/, /ngu/, /pa/, /ngu/ and /rnu/ (the sequences ng and rn represent single segments), that the feet are /pangu/ and /pangurnu/ and that there is a single prosodic word, namely /pangupangurnu/.

Having done the prosody, we proceed to look up the morphemes which might plausibly comprise the word. Warlpiri quite generally requires that morphemes be syllabifiable strings. The only exceptions to this are suffixes which consist of the sequence [sonorant][stop][vowel], for example the imperfective auxiliary base lpa. We can therefore find all possible morphological decompositions for a word by checking all [sonorant][stop][vowel] sequences and all well-formed syllable sequences and seeing if the strings spanning them correspond to known morphemes.
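As a concrete, much-simplified rendering of this decomposition step, the sketch below enumerates spans that are well-formed syllable sequences and looks them up in a toy lexicon. The CV(C) syllable template and the segment inventory are rough stand-ins for the real Warlpiri facts, and surface/underlying mismatches are ignored here (they are taken up next):

    VOWELS = {"a", "i", "u"}
    CONSONANTS = {"p", "t", "k", "j", "m", "n", "l", "r", "w", "y",
                  "ng", "rn", "rl", "rr"}   # a rough, illustrative inventory

    def syllabifiable(segs):
        """True if the segment list can be exhausted by CV(C) syllables."""
        if not segs:
            return True
        if len(segs) >= 2 and segs[0] in CONSONANTS and segs[1] in VOWELS:
            if syllabifiable(segs[2:]):                   # CV syllable
                return True
            if len(segs) >= 3 and segs[2] in CONSONANTS:  # CVC syllable
                if syllabifiable(segs[3:]):
                    return True
        return False

    def candidate_spans(segs, lexicon):
        """All (start, end, morpheme) spans whose surface string is a
        syllabifiable sequence matching a known morpheme."""
        found = []
        for i in range(len(segs)):
            for j in range(i + 1, len(segs) + 1):
                span = segs[i:j]
                if syllabifiable(span) and "".join(span) in lexicon:
                    found.append((i, j, "".join(span)))
        return found

On /pangupangurnu/ with a lexicon containing pangi and rnu, this naive version finds /rnu/ directly; the surface sequence /pangu/ only matches the root once the underspecified lookup described next is in place.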
Lexical lookup is complicated by the fact that the surface string can differ from the underlying representation of the morpheme in several ways. This can come about through the application of phonological rules. We implement lexical access in such cases by hashing on underspecified feature representations. In Warlpiri the only complication of this sort involves rounding of high vowels: for example, lexical /i/ may surface as /i/ or /u/ depending upon the harmony context. The verb root pangi will therefore match the input sequences /pangi/ and /pangu/.

[Figure 1 (acoustic data from a preliminary study of Warlpiri intonation) appeared here; its graphical content is not recoverable from this copy.]

Another way in which the surface representation of a morpheme may differ from its underlying representation is if it does not contain any segmental information, but merely information about prosodic shape. This type of morphology manifests itself in Warlpiri as reduplication. Briefly, the verbal reduplicative prefix is listed as a bimoraic foot: i.e., a foot of the form CV(C)(C)V. Whenever we see such a constituent, we posit the existence of verbal reduplication, subject to immediate verification that it matches the phonological material to its right. For Warlpiri, "matches" is "string equivalent to". For other languages, a more sophisticated notion of matching would be necessary, for example when phonological rules apply to only one part of the reduplicated pair.
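The bounded check itself is straightforward to sketch. The foot shapes follow the CV(C)(C)V description above; a real implementation would consult the prosodic parse for foot labels rather than recomputing shapes as this toy does:

    VOWELS = {"a", "i", "u"}   # as in the previous sketch

    def cv_shape(segs):
        return "".join("V" if s in VOWELS else "C" for s in segs)

    def is_bimoraic_foot(segs):
        # CV(C)(C)V, the listed shape of the verbal reduplicative prefix
        return cv_shape(segs) in {"CVV", "CVCV", "CVCCV"}

    def reduplication_candidates(segs):
        """Posit verbal reduplication wherever an initial bimoraic foot is
        string-equivalent to the material immediately to its right. The
        match is bounded by the size of the foot, so no general copy
        (ww) mechanism is needed."""
        hits = []
        for k in (3, 4, 5):          # segment lengths of CVV, CVCV, CVCCV
            prefix = segs[:k]
            if is_bimoraic_foot(prefix) and segs[k:2 * k] == prefix:
                hits.append(k)
        return hits

    # /pangupangurnu/: the initial foot /pangu/ matches what follows it
    assert reduplication_candidates(
        ["p", "a", "ng", "u", "p", "a", "ng", "u", "rn", "u"]) == [4]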
In /pangupangurnu/, the first sequence /pangu/ is a bimoraic foot, and furthermore it matches appropriately with the sequence to its right. Therefore we can here posit the existence of a verbal reduplicative affix.

Having found the possible morphemes, we have a lattice of morphemes spanning the input. In the example case, we have a lattice with a unique path comprising Verbal-Reduplication, pangi, rnu. We now wish to check that, from a phonological point of view alone, the affixes can be combined in the order given. That is, the affix path must be well-formed according to a morphophonological grammar for Warlpiri. We can state the morphophonological grammar simply as follows (where VHD stands for 'Vowel Harmony Domain'):

    Word -> (Prefix) VHD
    VHD  -> [Root Suffix*] ∩ Vowel-Harmony

The first rule indicates that a word consists of an optional prefix followed by a Vowel-Harmony-Domain; the second claims that a Vowel-Harmony-Domain is a string analyzable as a root followed by some number of suffixes, taken together with the Vowel Harmony process. We check the application of phonological rules, such as Vowel Harmony, by checking to see that the sequence of surface segments can be paired with the sequence of lexical segments in the underlying morphemes and that the surface string is well-formed according to the statement of the rules. This we do by a mechanism formally equivalent to the finite state transducer mechanism of the KIMMO model. In particular, we implement phonological rules as rejection sets (Koskenniemi, 1983), which are stated as regular expressions over the set of possible lexical/surface segment correspondences. However, in our model, phonological rules are defined for particular domains of application rather than applying continuously as in the KIMMO parser for Finnish. For example, Warlpiri Vowel Harmony is defined to apply over the sequence consisting of a root followed by its suffixes, but not over prefixes [7].

Having established the identity of the morphemes of the word, and having further established that each potential morphological analysis is well-formed from a phonological point of view (i.e., the morphemes are in the right order and the relevant phonological rules have applied correctly over the appropriate domains), we then pass the morphological analysis off to the syntactic parser. More specifically, we pass off what we call a "flattened representation", which encodes only the information as to what order the morphemes occur in and where the word boundaries are. Arguably the syntactic parser does need to know where the phonological words and phrases are, but the fine details of the phonological structure are not needed. The potential non-isomorphism between phonological and syntactic structure is derived from the narrow bandwidth of the channel between the phonological and syntactic components of the parser. This non-isomorphism is illustrated when a morpheme which is phonologically an affix is syntactically a separate word; this is the case with cliticization.
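To illustrate the rejection-set idea, here is a minimal sketch: aligned correspondences are rendered as lexical:surface pairs, and a toy regular expression (our own, far cruder than the real rule) rejects an /i/ that surfaces as /u/ without a following surface /u/ in the same Vowel Harmony Domain:

    import re

    def pairs(lexical, surface):
        """Render aligned lexical/surface segments as 'lex:surf' tokens."""
        return " ".join(f"{l}:{s}" for l, s in zip(lexical, surface))

    # Toy rejection pattern for regressive rounding harmony: reject i:u
    # when the next vowel in the domain does not surface as /u/ (or when
    # there is no following vowel to trigger the rounding at all).
    REJECT = re.compile(r"(?:^| )i:u(?: [^ ]+:[^aiu ]+)*(?: [^ ]+:[ia]| *$)")

    def harmony_ok(lexical, surface):
        """Accept a correspondence iff no rejection pattern matches.
        Checked only over a Vowel Harmony Domain (root plus suffixes)."""
        return REJECT.search(pairs(lexical, surface)) is None

    # pangi + rnu: lexical /i/ surfaces as /u/, triggered by the suffix /u/
    assert harmony_ok(["p", "a", "ng", "i", "rn", "u"],
                      ["p", "a", "ng", "u", "rn", "u"])
    # ...but /i/ may not round when the following vowel surfaces as /i/
    assert not harmony_ok(["p", "a", "ng", "i", "rn", "i"],
                          ["p", "a", "ng", "u", "rn", "i"])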
Also exemplary of the division of duty between the morphophonological parser and the syntactic parser is the dual status of subcategorization in Warlpiri. For example, the ergative case suffix has two forms, /rlu/ and /ngku/. Both are subcategorized to occur with nominals, a fact that is crucial in the projection and selection of syntactic constituency. The choice between /rlu/ and /ngku/, on the other hand, is conditioned by subcategorization with respect to the prosodic structure of the stem, /ngku/ being restricted to bimoraic stems. This subcategorization is only an issue for the morphophonological parser, and is never even visible to the syntactic parser.

In Figure 2 we give an illustration of the behavior of the morphological and syntactic parsers on a more complicated example:

    Ngarrka-ngku-ka marlu marna-kurra luwa-rnu ngarni-nja-kurra
    (man-ergative-aux kangaroo grass-obj shoot-past eat-infinitive-obj)
    'The man is shooting the kangaroo while it is eating grass.'

This example illustrates a number of instances of phonological and syntactic mismatch.

5. Extensions and Improvements to the Current Work

The model proposed here, although designed and implemented for Warlpiri, is intended to be a general approach to morphological parsing. A number of extensions can easily be made and a number of design improvements are necessary. First, reduplication, as we have noted, is only one of the kinds of morphology which are best defined in terms of prosodic constituents. The morphology of Arabic verbs (McCarthy, 1979) is another example of this, as is infixation. While Warlpiri does not exhibit these morphological processes, there would be no problem extending the parser to cover languages which do, since it is already designed to handle prosodically defined morphology.

Another problem which comes up in the current implementation is that the ordering of syntactic parsing after morphological parsing fails to identify syntactically ill-formed words as early as possible. To give a simple example from English, the string analyz-iti-able is arguably well-formed as far as the phonology is concerned, but is ill-formed syntactically since -ity attaches to adjectives, not to verbs, and -able attaches to verbs, not to words ending in -ity, which are themselves invariably nouns. The current parsing system would discover that such a word was well-formed phonologically, only to realize that the word was in fact ill-formed when the syntax was reached. Needless to say, the solution is to interleave the phonological and syntactic analyses. Sequences like analyz-iti-able would then be detected early as ill-formed.

6. Summary

To summarize, we have built a morphological parsing system for Warlpiri which directly encodes prosodic notions and which also encodes the kind of non-isomorphy between phonological and syntactic representations exhibited in natural languages. We have argued that it is necessary for any general theory of morphological processing to encode these notions. We view the parsing system as a partial but general theory of morphological processing, and the work we have done on Warlpiri as a particular instantiation of this general model.

Acknowledgments

We would like to thank Mary Laughren and Ken Hale for their advice on Warlpiri.

Notes

* This work was partially supported by the Social Sciences and Humanities Research Council of Canada.

[1] Reduplication is a word formation process involving the repetition of a word or a part of a word. As an example, in Warlpiri there is a process of nominal reduplication to form the plural: kurdu 'child' -> kurdukurdu 'children'.

[2] Infixation, like prefixation and suffixation, involves the attachment of an affix to a word; but, unlike these other two processes, an infixed affix occurs within the word rather than at the edge of the word.
[3] Vowel Harmony is a phonological process in which the vowels within a certain domain (usually a word) must agree in some set of features.

[4] The /i/ of the verb stem is changed due to the following /u/ of the past tense morpheme. This contrasts with /pangipangirni/ 'dig repeatedly', where the nonpast morpheme, rni, does not trigger such a stem change.

[5] Vowels bearing primary stress are aligned with 1, those bearing secondary stress are aligned with 2.

[6] A foot is a level of metrical structure intermediate between the syllable and the word.

[7] These domains correspond to the strata of Lexical Phonology (Kiparsky, 1982; Mohanan, 1982; inter alia).

[Figure 2 (tree diagrams; the graphical content is not recoverable from this copy). Figure 2a is the phonological representation for the sentence ngarrka-ngku-ka marlu marna-kurra luwa-rnu ngarni-nja-kurra 'The man is shooting the kangaroo while it is eating grass.' Figure 2b is the syntactic representation for that sentence. Note that the bracketing into phonological words is not isomorphic with the syntactic bracketing.]

References

Barton, E. (1986). "Computational Complexity in Two-Level Morphology." Proceedings of the 24th Conference of the Association for Computational Linguistics, 53-59, Columbia University, New York.

Brunson, B. (1986). A Processing Model for Warlpiri Syntax and Implications for Linguistic Theory. M.A. Thesis, University of Toronto, forthcoming as a TR of the Computer Science Department, University of Toronto.

Church, K. (1983). Phrase-Structure Parsing: A Method for Taking Advantage of Allophonic Constraints. Ph.D. Thesis, MIT, published by IULC.

Karttunen, L. (1983). "KIMMO: A Two-Level Morphological Analyzer." Texas Linguistic Forum, 22, 165-186.

Kiparsky, P. (1982). "Lexical Phonology and Morphology." In Linguistics in the Morning Calm, Linguistic Society of Korea. Seoul: Hanshin.

Koskenniemi, K. (1983). Two-Level Morphology: A General Computational Model for Word-Form Recognition and Production. Ph.D. Thesis, University of Helsinki.

Levin, J. (1985). A Metrical Theory of Syllabicity. Ph.D. Thesis, MIT.

Marantz, A. (1982). "Re Reduplication." Linguistic Inquiry, 13(3): 435-482.

Marantz, A. (1984). On the Nature of Grammatical Relations. Cambridge, MA: MIT Press.

McCarthy, J. (1979). Formal Problems in Semitic Phonology and Morphology. Ph.D. Thesis, MIT, published by IULC.

Mohanan, K.P. (1982). Lexical Phonology. Ph.D. Thesis, MIT, published by IULC.

Nash, D. (1980). Topics in Warlpiri Grammar. Ph.D. Thesis, MIT.

Sproat, R. (1985). On Deriving the Lexicon. Ph.D. Thesis, MIT.
PREDICTIVE COMBINATORS: A METHOD FOR EFFICIENT PROCESSING OF COMBINATORY CATEGORIAL GRAMMARS

Kent Wittenburg
MCC, Human Interface Program, 3500 West Balcones Center Drive, Austin, TX 78759
Department of Linguistics, University of Texas at Austin, Austin, TX 78712

ABSTRACT

Steedman (1985, 1987) and others have proposed that Categorial Grammar, a theory of syntax in which grammatical categories are viewed as functions, be augmented with operators such as functional composition and type raising in order to analyze "noncanonical" syntactic constructions such as wh-extraction and node raising. A consequence of these augmentations is an explosion of semantically equivalent derivations admitted by the grammar. The present work proposes a method for circumventing this spurious ambiguity problem. It involves deriving new, specialized combinators and replacing the original basic combinators with these derived ones. In this paper, examples of these predictive combinators are offered and their effects illustrated. An algorithm for deriving them, as well as a discussion of their semantics, will be presented in forthcoming work.

Introduction

In recent years there has been a resurgence of interest in Categorial Grammar (Adjukiewicz 1935; Bar-Hillel 1953). The work of Steedman (1985, 1987) and Dowty (1987) is representative of one recent direction in which Categorial Grammar (CG) has been taken, in which the operations of functional composition and type raising have figured in analyses of "noncanonical" structures such as wh-dependencies and nonconstituent conjunction. Based on the fact that such operations have their roots in the combinatory calculus (Curry and Feys 1958), this line of Categorial Grammar has come to be known as Combinatory Categorial Grammar (CCG). While such an approach to syntax has been demonstrated to be suitable for computer implementation with unification-based grammar formalisms (Wittenburg 1986a), doubts have arisen over the efficiency with which such grammars can be processed. Karttunen (1986), for instance, argues for an alternative to rules of functional composition and type raising in CGs on such grounds [1]. Other researchers working with Categorial Unification Grammars consider the question of what method to use for long-distance dependencies an open one (Uszkoreit 1986; Zeevat, Klein, and Calder 1986).

The property of Combinatory Categorial Grammars that has occasioned concerns about processing is spurious ambiguity: CCGs that directly use functional composition and type raising admit alternative derivations that nevertheless result in fully equivalent parses from a semantic point of view. In fact, the numbers of such semantically equivalent derivations can multiply at an alarming rate. It was shown in Wittenburg (1986a) that even constrained versions of functional composition and type raising can independently cause the number of semantically equivalent derivations to grow at rates exponential in the length of the input string [2].
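To see how fast this growth can be, note (as footnote 2 below records) that the functional composition case tracks the Catalan series: every binary bracketing of a chain of composable categories is a licit derivation. The toy calculation below is our own illustration:

    from math import comb

    def catalan(n):
        """Number of binary bracketings of n+1 adjacent items."""
        return comb(2 * n, n) // (n + 1)

    # Semantically equivalent ways to reduce a chain of k composable
    # categories: C(k-1). For k = 8 that is already 429 derivations.
    for k in range(2, 9):
        print(k, catalan(k - 1))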
While this spurious ambiguity property may not seem to be a particular problem if a depth-first (or best-first) parsing algorithm is used (after all, if one can get by with producing just one derivation, one has no reason to go on generating the remaining equivalent ones), the fact is that both in cases where the parser ultimately fails to generate a derivation and where one needs to be prepared to generate all and only genuinely (semantically) ambiguous parses, spurious ambiguity may be a roadblock to efficient parsing of natural language from a practical perspective.

The proposal in the present work is aimed toward eliminating spurious ambiguity from the form of Combinatory Categorial Grammars that are actually used during parsing. It involves deriving a new set of combinators, termed predictive combinators, that replace the basic forms of functional composition and type raising in the original grammar. After first reviewing the theory of Combinatory Categorial Grammar and the attendant spurious ambiguity problem, we proceed to the subject of these derived combinators. At the conclusion, we compare this approach to other proposals.

[1] Karttunen suggests that these operations, at least in their most general form, are computationally intractable. However, it should be noted that neither Steedman nor Dowty has suggested that a fully general form of type raising, in particular, should be included as a productive rule of the syntax. And, as Friedman, Dai, and Wang (1986) have shown, certain constrained forms of these grammars that nevertheless include functional composition are weakly context-free. Aravind Joshi (personal communication) strongly suspects that the generative capacity of the grammars that Steedman assumes, say, for Dutch, is in the same class with Tree Adjoining Grammars (Joshi 1985) and Head Grammars (Pollard 1984). Thus, computational tractability is, I believe, not at issue for the particular CCGs assumed here.

[2] The result in the case of functional composition was tied to the Catalan series (Knuth 1975), which Martin, Church, and Patil (1981) refer to as "almost exponential". For a particular implementation of type raising, it was 2^(n-1). The fact that derivations grow at such a rate, incidentally, does not mean that these grammars, if they are weakly context-free, are not parsable in n³ time. But it is such ambiguities that can occasion the worst case for such algorithms. See Martin, Church, and Patil (1981) for discussion.

Overview of CCG

The theory of Combinatory Categorial Grammar has two main components: a categorial lexicon that assigns grammatical categories to string elements and a set of combinatory rules that operate over these categories [3].

Categorial lexicon

The grammatical categories assigned to string elements in a Categorial Grammar can be basic, as in the category CN, which might be assigned to the common noun man, or they may be of a more complex sort, namely, one of the so-called functor categories. Functor categories are of the form X|Y, which is viewed as a function from categories of type Y to categories of type X. Thus, for instance, a determiner such as the might be assigned the category NP|CN, an indication that it is a function from common nouns to noun phrases.
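One way to encode such functor categories as explicit terms (a result, a direction, and an argument) is sketched below; the representation is our own illustration, whereas the implementation reported at the end of this paper uses a PATR-like unification formalism:

    from dataclasses import dataclass
    from typing import Union

    @dataclass(frozen=True)
    class Atom:
        name: str                      # e.g. "S", "NP", "CN"
        def __str__(self):
            return self.name

    @dataclass(frozen=True)
    class Functor:
        result: "Cat"
        direction: str                 # "/" = argument right, "\" = left
        argument: "Cat"
        def __str__(self):
            arg = (f"({self.argument})" if isinstance(self.argument, Functor)
                   else str(self.argument))
            return f"{self.result}{self.direction}{arg}"

    Cat = Union[Atom, Functor]

    S, NP, CN = Atom("S"), Atom("NP"), Atom("CN")
    the = Functor(NP, "/", CN)                      # NP/CN
    ate = Functor(Functor(S, "\\", NP), "/", NP)    # (S\NP)/NP
    assert str(ate) == "S\\NP/NP"                   # left-associative printing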
An example of a slightly more complex functor category would be tensed transitive verbs, which might carry the category (S|NP)|NP. This can be viewed as a second-order function from (object) noun phrases to another function, namely S|NP, which is itself a function from (subject) noun phrases to sentences [4]. (Following Steedman, we will sometimes abbreviate this finite verb phrase category as the symbol FVP.) Directionality is indicated in the categories with the following convention: a right-slanting slash indicates that the argument Y must appear to the right of the functor, as in X/Y; a left-slanting slash indicates that the argument Y must appear to the left, as in X\Y [5]. A vertical slash in this paper is to be interpreted as specifying a directionality of either left or right.

Combinatorial rules

Imposing directionality on categories entails including two versions of the basic functional application rule in the grammar. Forward functional application, which we will note as 'fa>', is shown in (1a); backward functional application ('fa<') in (1b).

    (1) a. Forward Functional Application (fa>):   X/Y Y => X
        b. Backward Functional Application (fa<):  Y X\Y => X

An example derivation of a canonical sentence using just these combinatory rules is shown in (2).

    (2)                        S
           --------------------------------------fa<
           NP                    S\NP (=FVP)
         --------fa>          ------------------------fa>
                                                  NP
                                              ---------fa>
         NP/CN   CN     S\NP/NP        NP/CN      CN
         the     man    ate            the        cake

Using just functional application results in derivations that typically mirror traditional constituent structure. However, the theory of Combinatory Categorial Grammar departs from other forms of Categorial Grammar and related theories such as HPSG (Pollard 1985; Sag 1987) in the use of functional composition and type raising in the syntax, which occasions partial constituents within derivations. Functional composition is a combinatory operation whose input is two functors and whose output is also a functor composed out of the two inputs. In (3) we see one instance of functional composition (perhaps the only one) that is necessary in English [6].

    (3) Forward functional composition (fc>):   X/Y Y/Z => X/Z

The effect of type raising, which is to be taken as a rule schema that is instantiated through individual unary rules, is to change a category that serves as an argument for some functor into a particular kind of complex functor that takes the original functor as its new argument. An instance of a type-raising rule for topicalized NPs is shown in (4a); a rule for type-raising subjects is shown in (4b) in two equivalent notations.

    (4) a. Topicalization (top):        NP => S/(S/NP)
        b. Subject type-raising (str):  NP => S/FVP   [NP => S/(S\NP)] [7]

The rules in (3) and (4) can be exploited to account for unbounded dependencies in English. An instance of topicalization is shown in (5).

[3] In Wittenburg (1986a), a set of unary rules is also assumed that may permute arguments and shift categories in various ways, but these rules are not germane to the present discussion.

[4] When parentheses are omitted from categories, the bracketing is left-associative, i.e., S|NP|NP receives exactly the same interpretation as (S|NP)|NP.

[5] Note that X is the range of the functor in both these expressions and Y the domain. This convention does not hold across all the categorial grammar literature.

[6] Functional composition is known as B in the combinatory calculus (Curry and Feys 1958).

[7] The direction of the slash in the argument category poses an obvious problem for cases of subject extraction, a topic which we will not have space to discuss here. But see Steedman (1987).
    (5)                               S
         -----------------------------------------------fa>
                                      S/NP
                    ------------------------------------fc>
                                      S/FVP
                    -------------------------fc>
                    S/S
                    ------------fc>
         S/(S/NP)   S/FVP                      S/FVP
         -------top -----str                   -----str
         NP         NP        FVP/S            NP        FVP/NP
         Apples     he        said             John      hates!

Such analyses of unbounded dependencies get by without positing special conventions for percolating slash features, without empty categories and associated ε-rules, and without any significant complications to the string-rewriting mechanisms such as transformations. The two essential ingredients, namely type-raising and functional composition, are operations of wide generality that are sufficient for handling node-raising (Steedman 1985; 1987) and other forms of nonconstituent conjunction (Dowty 1987). Using these methods to capture unbounded dependencies also preserves a key property of grammars, namely, what Steedman (1985) refers to as the adjacency property, maintained when string-rewriting operations are confined to concatenation. Grammars which preserve the adjacency property, even though they may or may not be weakly context-free, nevertheless can make use of many of the parsing techniques that have been developed for context-free grammars, since the applicability conditions for string-rewriting rules are exactly the same.

The spurious ambiguity problem

A negative consequence of parsing directly with the rules above is an explosion in possible derivations. While functional composition rules are required for long-distance dependencies (i.e., a CCG without such rules could not find a successful parse), they are essentially optional in other cases. Consider the derivation in (6) from Steedman (1985) [8].

    (6)  S
         ------------------------------------------------------fa>
         S/NP
         ------------------------------------------------fc>
         S/VP
         -------------------------------------------fc>
         S/FVP
         ------------------------------------fc>
         S/S
         -----------------------------fc>
         S/S'
         ---------------------fc>
         S/VP
         -------------fc>
         S/FVP  FVP/VP  VP/S'    S'/S   S/FVP  FVP/VP  VP/NP   NP
         I      can     believe  that   she    will    eat     cakes

This is only one of many well-formed derivations for this sentence in the grammar. The maximal use of functional composition rules gives a completely left-branching structure to the derivation tree in (6); the use of only functional application would give a maximally right-branching structure; a total of 460 distinct derivations are in fact given by the grammar for this sentence. Given that derivations using functional composition can branch in either direction, spurious ambiguity can arise even in sentences which depend on functional composition. Note, for instance, that if we topicalized cakes in (6), we would still be able to create the partial constituent S/NP bridging the string I can believe that she will eat in 132 different ways.

Some type-raising rules can also provoke spurious ambiguity, leading in certain cases to an exponential growth of derivations in the length of the string (Wittenburg 1986a). Here again the problem stems from the fact that type-raising rules can apply not just in cases where they are needed, but also in cases where derivations are possible without type raising. An example of two equivalent derivations made possible with subject type-raising is shown in (7).

    (7) a.           S
            --------------fa<
            NP       S\NP
            John     walks

        b.           S
            ------------------fa>
            S/(S\NP)
            --------str
            NP           S\NP
            John         walks
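The equivalence of (7a) and (7b) can be seen directly in the semantics: type raising an argument and then applying the result is just function application in disguise. A minimal sketch, with categories elided and only the semantics shown:

    walks = lambda x: ("walks", x)
    john = "john"

    # (7a): backward functional application, predicate applied to subject
    fa_backward = lambda arg, fn: fn(arg)

    # (7b): subject type-raising NP => S/(S\NP), then forward application
    type_raise = lambda x: (lambda pred: pred(x))
    fa_forward = lambda fn, arg: fn(arg)

    assert (fa_backward(john, walks)
            == fa_forward(type_raise(john), walks)
            == ("walks", "john"))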
Note that spurious ambiguity is different from the classic ambiguity problem in parsing, in which differing analyses will be associated with different attachments or other linguistically significant labelings and thus will yield differing semantic results. It is a crucial property of the ambiguity just mentioned that there is no difference with respect to the fully reduced semantics. While each of the derivations differs from the others in the presence or absence of some intermediate constituent(s), the semantics of the rules of functional composition and type raising ensure that after full reductions, the semantics will be the same in every case [9].

[8] We do not show the subject type-raising rules in this derivation, but assume they have already applied to the subject NPs.

[9] This equivalence holds also if the "semantics" consists of intermediate f-structures built by means of graph-unification-based formalisms, as in Wittenburg (1986a).

Predictive combinators

Here we show how it is possible to eliminate spurious ambiguity while retaining the analyses (but not the derivations) of long-distance dependencies just shown. The proposal involves deriving new combinatory rules that replace functional composition and the ambiguity-producing type-raising rules in the grammar. The difference between the original grammar and this derived one is that the new combinators will by nature be restricted to just those derivational contexts where they are necessary, whereas in the original grammar these rules can apply in a wide range of contexts.

The key observation is the following. Functional composition and certain type-raising rules are only necessary (in the sense that a derivation cannot be had without them) if categories of the form X|(Y|Z) appear at one end of a derivational substring. This category type is distinguished by having an argument term that is itself a functor. As proved by Dowty (1987), adding functional composition to Categorial Grammars that admit no categories of this type has no effect on the set of strings these grammars can generate, although of course it does have an effect on the number of derivations allowed. When CGs do allow categories of this type, then functional composition (and some instances of type raising) can be the crucial ingredient for success in derivations like those shown in schematic form in (8).

    (8) a.  X
            ------------------------------------------fa>
                         Y/Z
                     ==========================
            X/(Y/Z)  Y/Q     Q/W    W/ ... /Z
                      ^       ^        ^

        b.  X
            ------------------------------------------fa<
            Y/Z
            ==========================
            Y/Q     Q/W    W/ ... /Z    X\(Y/Z)
             ^       ^        ^

These schemata are to be interpreted as follows. The category strings shown at the bottom of (8a) and (8b) are either lexical category assignments or (as indicated by the carets) categories derivable in the grammar with rules of functional application or unary rules such as topicalization. Recall that CCGs with such rules alone have no spurious ambiguity problem. The category strings underneath the wider dashed lines are then reducible via (type raising and) functional composition into functional arguments of the appropriate sort that are only then reduced via functional application to the X terms [10]. It is this part of the derivation, i.e., the part represented by the pair of wider dashed lines, in which spurious ambiguity shows up. Note that (5) is intended to be an example of the sort of derivation being schematized in (8a): the topicalization rule applies underneath the leftmost category to produce the X/(Y/Z) type; all other categories in the bottommost string in (8a) correspond to lexical category assignments in (5).

[10] While we have implied (as evidenced by the right-leaning slashes on intermediate categories) that forward functional composition is the relevant composition rule, backwards functional composition could also be involved in the reduction of substrings, as could type raising.

There are two conditions necessary for eliminating spurious ambiguity in the circumstances we have just laid out. First, we must make sure that function composition (and unary rules like subject type-raising) only apply when a higher-type functor appears in a substring, as in (8). When no such higher-type functor appears, the rules must then be absent from the picture: they are unnecessary.
Second, we must be sure that when function composition and unary rules like subject type-raising do become involved, they produce unique derivations under conditions like (8), avoiding the spurious ambiguity that characterizes function composition and type raising as they have been stated earlier.

The natural solution for enforcing the first condition is to involve categories of type X|(Y|Z) in the derivations from the start. In other words, restricting the application of functional composition and the relevant type-raising rules is possible if we can incorporate some sort of top-down, or predictive, information from the presence of categories of type X|(Y|Z). Standard dotted-rule techniques found in Earley deduction (Earley 1970) and active chart parsing (Kay 1980) offer one avenue with which to explore the possibility of adding such control information to a parser. However, since the information carried by dotted rules in algorithms designed for context-free grammars has a direct correlate in the slashes already found in the categories of a Categorial Grammar, we can incorporate such predictive information into our grammar in categorial terms. Specifically, we can derive new combinatorial rules that directly incorporate the "top-down" information. I call these derived combinatorial rules predictive combinators [11].

It so happens that these same predictive combinators will also enforce the second condition mentioned above, by virtue of the fact that they are designed to branch uniformly from the site of the higher-type functor to the site of the "gap". For cases of leftward extraction (8a), derivations will be uniformly left-branching. For cases of rightward extraction (8b), derivations will be uniformly right-branching. It is our conjecture that CCGs can be compiled so as to force uniform branching in just this way without affecting the language generated by the grammar and without altering the semantic interpretations of the results. We will now turn to some examples of the derived combinatory rules in order to see how they might produce such derivations.

The first predictive combinator we will consider is derived from categories of type X/(Y/Z) and forward functional composition of the argument term of this category. It is designed for use in category strings like those that appear in (8a). The new rule, which we will call forward-predictive functional composition, is shown in (9).

    (9) Forward-predictive forward functional composition (fpfc>):
        X/(Y/Z) Y/W => X/(W/Z)

Assuming a CCG with the rule in (9) in place of forward functional composition, we are able to produce derivations such as (10). Here, as in some earlier examples, we assume subject type-raising has already applied to subject NP categories.

[11] There is a loose analogy between these predictive combinators and the concept of supercombinators first proposed by Hughes (1982). Hughes proposed, in the context of compilation techniques for applicative programming languages, methods for deriving new combinators from actual programs. He used the term supercombinators to distinguish this derived set from the fixed set of combinators proposed by Turner (1979). By analogy, predictive combinators in CCGs are derived from actual categories and rules defined in specific Combinatory Categorial Grammars. There are in principle infinitely many of them, depending on the particulars of individual grammars, and thus they can be distinguished from the fixed set of "basic" combinatorial rules for CCGs proposed by Steedman and others.
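Rule (9) can be sketched as a direct operation on the Atom/Functor category terms of the earlier sketch; plain structural equality stands in here for the unification the implemented system uses:

    # Reuses Atom, Functor, S, NP from the category sketch above.
    def fpfc(left, right):
        """(9): X/(Y/Z)  Y/W  =>  X/(W/Z). Returns None if inapplicable."""
        if (isinstance(left, Functor) and left.direction == "/"
                and isinstance(left.argument, Functor)
                and left.argument.direction == "/"
                and isinstance(right, Functor) and right.direction == "/"
                and right.result == left.argument.result):    # Y matches
            x = left.result
            z = left.argument.argument
            w = right.argument
            return Functor(x, "/", Functor(w, "/", z))
        return None

    # First step of derivation (10): S/(S/NP) + S/FVP  =>  S/(FVP/NP)
    FVP = Atom("FVP")
    topicalized = Functor(S, "/", Functor(S, "/", NP))
    raised_subject = Functor(S, "/", FVP)
    assert str(fpfc(topicalized, raised_subject)) == "S/(FVP/NP)"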
    (10) S
         -----------------------------------------------------fa>
         S/(VP/NP)
         ------------------------------------------------fpfc>
         S/(FVP/NP)
         -------------------------------------------fpfc>
         S/(S/NP)
         -------------------------------------fpfc>
         S/(S'/NP)
         ------------------------------fpfc>
         S/(VP/NP)
         -----------------------fpfc>
         S/(FVP/NP)
         ---------------fpfc>
         S/(S/NP)
         ------top
         NP     S/FVP  FVP/VP  VP/S'    S'/S   S/FVP  FVP/VP  VP/NP
         cakes  I      can     believe  that   she    will    eat

We took note above of the fact that there were at least 132 distinct derivations for the sentence now appearing in (10) with CCGs using forward functional composition directly. With forward-predictive forward functional composition in its place, there is one and only one derivation admitted by the grammar, namely, the one shown. In order to see this, note that the string to the right of cakes is irreducible with any rules now in the grammar. Only fpfc> can be used to reduce the category string, and it operates in a necessarily left-branching fashion, triggered by an X/(Y/Z) category at the left end of the string.

A second predictive combinator necessary to fully incorporate the effects of forward functional composition is a version of predictive functional composition that works in the reverse direction, i.e., backward-predictive forward functional composition. It is necessary for category strings like those in (8b), which are found in CCG analyses of English node raising (Steedman 1985). The rule is shown in (11).

    (11) Backward-predictive forward functional composition (bpfc>):
         W/Z X\(Y/Z) => X\(Y/W)

Intuitively, the difference between the backward-predictive and the forward-predictive versions of function composition is that the forward version passes the "gap" term rightward in a left-branching subderivation, whereas the backward version passes the "principal functor" in the argument term leftward in a right-branching subderivation. We see an example of both these rules working in the case of right node raising shown in (12). It is assumed here, as in Steedman (1985), that the conjunction category involves finding like bindings for category variables corresponding to each of the conjuncts. We use A and B below as names for these variables, and the vertical slash must be interpreted here as a directional variable as well. Note that bindings of variables in rule applications, as say the X term in the instance of bpfc>, can involve complex parenthesized categories (recall that we assume left-association) in addition to basic ones.

    (12) S
         ------------------------------------------------------fa>
         S/NP
         ------------------------------------------------fa>
         (S/NP)/(FVP/NP)
         -----------------------------------------fpfc>
         (S/NP)/(S/NP)
         ----------------------------------fa<
                 (A/NP)/(A/NP)\(A/FVP)
                 ---------------------bpfc>
         S/FVP   FVP/NP   (A|B)/(A|B)\(A|B)   S/FVP   FVP/NP   NP
         John    baked    but                 Harry   ate      X

It is our current conjecture that replacing forward functional composition in CCGs with the two rules shown will eliminate any spurious ambiguity that arises directly from this composition rule. However, we have yet to show how spurious ambiguity from subject type-raising can be eliminated.
The strategy will be the same, namely, to replace subject type-raising with a set of predictive combinators that force uniformly branching subderivations in cases requiring function composition.

For compiling out unary rules generally, it is necessary to consider all existing combinatory rules in the grammar. In our current example grammar, we have four rules to consider in the compilation process: forward and backward (predictive) functional application, and the newly derived predictive function composition rules as well. Subject type-raising can in fact be merged with each of the four combinatory rules mentioned to produce four new predictive combinators, each of which has motivation for certain cases of node raising. Here we will look at just one example, namely, the rule necessary to get leftward "movement" (topicalization and wh-extraction) over subjects. Such a rule can be derived by merging subject type-raising with the right daughter of the new forward-predictive forward function composition rule, maintaining all bindings of variables in the process. This new rule, which, in the interest of brevity, we call forward-predictive subject type raising, is shown in (13).

    (13) Forward-predictive subject type raising (fpstr):
         X/(S/Z) NP => X/(FVP/Z)

The replacement of subject type raising with the predictive combinator in (13) eliminates spurious derivations such as (7b).
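The merge itself can be sketched mechanically, reusing the definitions from the sketches above: substitute the output of the unary rule for the right-hand input of fpfc> and simplify. This symbolic rendering is our own; the compiler reported below operates over unification-grammar encodings of the rules:

    # Reuses fpfc, Functor, S, NP, FVP from the earlier sketches.
    def subject_type_raise(cat):
        """(4b): NP => S/FVP (i.e., S/(S\\NP))."""
        return Functor(S, "/", FVP) if cat == NP else None

    def fpstr(left, right):
        """(13): X/(S/Z) NP => X/(FVP/Z), i.e. fpfc> with its right
        input pre-raised at compile time rather than during parsing."""
        raised = subject_type_raise(right)
        return fpfc(left, raised) if raised is not None else None

    # A step from derivation (14): S/(S/NP) + NP  =>  S/(FVP/NP)
    assert str(fpstr(Functor(S, "/", Functor(S, "/", NP)), NP)) == "S/(FVP/NP)"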
This is be- cause all possible well-formed bracketings of a string are in fact admitted by the grammar in these worst cases (as ex- emplified by (6)) and the best the Earley algorithm can do when filling out, a chart (or its equivalent) in such cir- cumstances is O(n3). The methods presented here for nor- realizing CCGs through predictive combinators eliminate this particular source of worst case ambiguity. Asymptotic pars- ing complexity will then be no better or worse than the grammar and parser yield independently from the spurious ambiguity problem. Further, whatever the worst case results are, there will presumably be statistically fewer instances of the worst cases since an omnipresent source of all-ways am- biguity will have been eliminated. Work on predictive eombinators at MCC is ongoing. At the time of this writing, an experimental algorithm for corn- 12Even if the CCGs in question are not weakly context- free, it is still likely that asymptotic complexity results will be polynomial unless the relevant class is not within that of the limited extensions to context-free grammars that include Head Grammars (Pollard 1984) and TAGs (Joshi 1985). Pol- lard (1984) has a result of n 7 for Head Grammars. piling a predictive form of CCGs, given a base form along the lines of Steedman (1985), has been implemented for CCGs expressed in a PATR-like unification grammar for- malism (Shieber 1984). We believe from experience that our algorithm is correct and complete, although we do not have a formal proof at this point. A full formal characterization of the problem, along with algorithms and accompanying cor- rectness proofs, is forthcoming. Comparison with previous work Previous suggestions in the literature for coping with spurious ambiguity in CCGs are characterized not by eliminating such ambiguity from the grammar but rather by 13 attempting to minimize its effects during parsing. Karttunen (1986) has suggested using equivalence tests during processing; in his modified Earley chart parsing algo- rithm, a subeonstituent is not added to the chart without first testing to see if an equivalent constituent has already been built. 14 In its effects on complexity, this check is really no different than a step already present in the Earley algo- rithm: an Earley state (edge) is not added to a state set (vertex) without first checking to see if it is a duplicate of one already there. 15 The recognition algorithm does nothing with duplicates; for the Earley parsing algorithm, duplicates engender an additional small step involving the placement of a pointer so that the analysis trees can be recovered later. Duplicates generated from functional composition (or from other spurious ambiguity sources) require a treatment no dif- ferent than Earley's duplicates except that no pointers need to be added in parsing-their derivations are simply redun- dant from a semantic point of view and thus they can be ig- nored for later processing. Karttunen's proposal does not change the worst-case complexity results for Earley's algo- rithm used with CCGs as discussed above and thus does not offer much relief from the spurious ambiguity problem. However, parsing algorithms such as Karttunen's that check for duplicates are of course superior from the point of view of asymptotic complexity to parsing algorithms which fail to make cheeks. 
The latter sort will on the face of it be ex- ponential when faced with ambiguity as in (6) since each of the independent derivations corresponding to the Catalan series will have to be enumerated independently. In earlier work (Wittenburg 1986a, 1986b), I have sug- gested that heuristics used with a best-first parsing algorithm can help cope with spurious ambiguity. It is clear to me now that, while more intelligent methods for directing the search van significantly improve performance in the average case, they should not be viewed as a solution to spurious am- biguity in general. Genuine ambiguity and unparsable input in natural language can force the parser to search exhaus- tively with respect to the grammar. While heuristics used even with a large search space can provide the means for tuning performance for the "best" analyses, the search space itself will determine the results in the "worst" cases. Com- piling the grammar into a normal form based on the notion of predictive eombinators makes exhaustive search more palatable, whatever the enumeration order, since the search 13This characterization also apparently holds for the proposals from Pareschi and Steedman (1987) being presented at this conferenee. 14While Karttunen's categorial fragment for Finnish does not make direct use of functional composition and type rais- ing, it nevertheless suffers from spurious ambiguity of a similar sort stemming from the nature of the categories and functional application rules he defines. 15The n 3 result crucially depends on this check, in fact. 78 space itself is vastly reduced. Heuristics (along with best- first methods generally) may still be valuable in the reduced space, but any enumeration order will do. Thus Earley pars- ing, best-first enumeration, and even LR techniques are still all consistent with the proposal in the current work. ACKNOWLEDGEMENTS The research on which this paper is based was carried out in connection with the Lingo Natural Language Interface Project at MCC. I am grateful to Jim Barnett, Elaine Rich, Greg Whittemore, and Dave Wroblewski for discussions and comments. This work has also benefitted from discussions with Scott Danforth and Aravind Joshi, and particularly from the helpful comments of Mark Steedman. REFERENCES Adjukiewicz, K. 1935. Die Syntaktische Konnexitat. Studia Philosophica 1:1-27. [English translation in Storrs McCall (ed.). Polish Logic 1920-1939, pp. 207-231. Oxford University Press.] Bar-Hillel, Y. 1953. A Quasi-Arithmetical Notation for Syntactic Description. Language 29: 47-58. [Reprinted in Y. Bar-Hillel, Language and Infor- mation, Reading, Mass.: Addison-Wesley, 1964, pp. 61-74. I Curry, H., and R. Feys. 1958. Combinatory Logic: Volume 1. Amsterdam: North Holland. Dowry, D. 1987. Type Raising, Functional Composi- tion, and Non-Constituent Conjunction. To ap- pear in R. Oehrle, E. Bach, and D. Wheeler (eds.), Categorial Grammars and Natural Language Structures, Dordrecht. Earley, J. 1970. An Efficient Context-Free Parsing Al- gorithm. Communications of the ACM 13:94-102. Friedman, J., D. Dai, and W. Wang. 1986. The Weak Generative Capacity of Parenthesis-Free Categorial Grammars. Technical report no. 86-001, Computer Science Department, Boston University, Boston, Massachusetts. Hughes, R. 1982. Super-combinators: a New Im- plementation Method for Applicative Languages. In Symposium on Lisp and Functional Program- ming, pp. 1-10, ACM. Joshi, A. 1085. 
Tree Adjoining Grammars: How Much Context-Sensitivity is Required to Provide Reasonable Structural Structural Descriptions? In D. Dowry, L. Karttunen, and A. Zwicky (ads.), Natural Language Parsing: Psychological, Com- putational, and Theoretical Perspectives. Cambridge University Press. Karttunen, L. 1986. Radical Lexicalism. Paper presented at the Conference on Alternative Con- ceptions of Phrase Structure, July 1986, New York. Kay, M. 1980. Algorithm Schemata and Data Struc- tures in Syntactic Processing. Xerox Palo Alto Research Center, tech report no. CSL-80-12. Knuth, D. 1975. The Art of Computer Programming. Voh 1: Fundamental Algorithms. Addison Wes- ley. Martin, W., K. Church, and R. Patil. 1981. Preliminary Analysis of a Breadth-First Parsing Algorithm: Theoretical and Experimental Results. MIT tech report no. MIT/LCS/TR-291. Pareschi, R., and M. Steedman. 1987. A Lazy Way to Chart Parse with Categorial Grammars, this volume. Pollard, C. 1984. Generalized Phrase Structure Gram- mars, Head Grammars, and Natural Languages. Ph.D. dissertation, Stanford University. Pollard, C. 1985. Lecture Notes on Head-Driven Phrase Structure Grammar. Center for the Study of Language and Information, Stanford University, Palo Alto, Calif. Sag, I. 1987. Grammatical Hierarchy and Linear Precedence. To appear in Syntax and Semantics, Volume 20: Discontinuous Constituencies, Academic. Shieber, S. 1984. The Design of a Computer Language for Linguistic Information. Proceedings of Coling84, pp. 362-366. Association for Computa- tional Linguistics. Steedman, M. 1985. Dependency and Coordination in the Grammar of Dutch and English. Language 61:523-568. Steedman, M. 1987. Combinators and Grammars. To appear in R. Oehrle, E. Bach, and D. Wheeler (eds.), Categorial Grammars and Natural Lan- guage Structures, Dordrecht. Turner, D. 1979. A New Implementation Technique for Applicative Languages. Software -- Practice and Experience 9:31-49. Uszkoreit, H. 1986. Categorial Unification Grammars. In Proceedings of Coling 1986, pp. 187-194. Wittenburg, K. 1985a. Natural Language Parsing with Combinatory Categorial Grammars in a Graph- Unification-Based Formalism. Ph.D. disser- tation, University of Texas at Austin.[Some of this material is available through MCC tech reports HI-012-86, HI-075-86, and HI-179-86.] Wittenburg, K. 1986b. A Parser for Portable NL Inter- faces using Graph-Unification-Based Grammars. 79 In Proceedings of AAA/-86, pp. 1053-10,58. Zeevat, H., E. Klein, and J. Calder. 1086. Unification Categorisl Grammar. Centre for Cognitive Science, University of Edinburgh. 80
A Lazy Way to Chart-Parse with Categorial Grammars

Remo Pareschi and Mark Steedman
Dept. of AI and Centre for Cognitive Science, Univ. of Edinburgh, and Dept. of Computer and Information Science, Univ. of Pennsylvania

ABSTRACT
There has recently been a revival of interest in Categorial Grammars (CG) among computational linguists. The various versions noted below, which extend pure CG by including operations such as functional composition, have been claimed to offer simple and uniform accounts of a wide range of natural language (NL) constructions involving bounded and unbounded "movement" and coordination "reduction" in a number of languages. Such grammars have obvious advantages for computational applications, provided that they can be parsed efficiently. However, many of the proposed extensions engender proliferating semantically equivalent surface syntactic analyses. These "spurious analyses" have been claimed to compromise their efficient parseability. The present paper describes a simple parsing algorithm for our own "combinatory" extension of CG. This algorithm offers a uniform treatment for "spurious" syntactic ambiguities and the "genuine" structural ambiguities which any processor must cope with, by exploiting the associativity of functional composition and the procedural neutrality of the combinatory rules of grammar in a bottom-up, left-to-right parser which delivers all semantically distinct analyses via a novel unification-based extension of chart-parsing.

1. Combinatory Categorial Grammars
"Pure" categorial grammar (CG) is a grammatical notation, equivalent in power to context-free grammars, which puts all syntactic information in the lexicon, via the specification of all grammatical entities as either functions or arguments. For example, such a grammar might capture the obvious intuitions concerning constituency in a sentence like John must leave by identifying the VP leave and the NP John as the arguments of the tensed verb must, and the verb itself as a function combining to its right with a VP, to yield a predicate -- that is, a leftward-combining function-from-NPs-into-sentences. One common "slash" notation for the types of such functions expresses them as triples of the form <result, direction, argument>, where result and argument are themselves syntactic types, and direction is indicated by "/" (for rightward-combining functions) or "\" (for leftward). Must then gets the following type-assignment:

(1) must := (S\NP)/VP

In pure categorial grammar, the only other element is a single "combinatory" rule of Functional Application, which gives rise to the following two instances:(1)

(2) a. Rightward Application: X --> X/Y Y
    b. Leftward Application: X --> Y X\Y

1 All combinatory rules are written as productions in the present paper, in contrast with the reduction rule notation used in the earlier papers. The change is intended to aid comparison with other unification-based grammars, and has no theoretical significance.
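As a concrete illustration of the slash notation and the two application rules, here is a minimal sketch in Python (ours, not the authors'); the triple representation and the category names are assumptions made for the example only.

# Categories are either atomic strings ("NP", "S", "VP") or
# (result, direction, argument) triples, as in the slash notation.
def fn(result, direction, argument):
    return (result, direction, argument)

def apply_rules(left, right):
    """Try both instances of Functional Application on two
    adjacent categories; return the results (possibly empty)."""
    results = []
    # Rightward Application: X --> X/Y Y
    if isinstance(left, tuple) and left[1] == "/" and left[2] == right:
        results.append(left[0])
    # Leftward Application: X --> Y X\Y
    if isinstance(right, tuple) and right[1] == "\\" and right[2] == left:
        results.append(right[0])
    return results

must = fn(fn("S", "\\", "NP"), "/", "VP")       # (S\NP)/VP
print(apply_rules(must, "VP"))                  # [('S', '\\', 'NP')], i.e. S\NP
print(apply_rules("NP", fn("S", "\\", "NP")))   # ['S']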
These rules allow functions to combine with immediately adjacent arguments in the obvious way, to yield the obvious surface structures and interpretations, as in:

(3)  John   must        leave
     NP     (S\NP)/VP   VP
            ------------------ >apply
            S\NP
     ------------------------- <apply
     S

Combinatory Categorial Grammar (CCG) (Ades and Steedman 1982, Steedman 1985, Steedman 1986) adds a number of further elementary operations on functions and arguments to the combinatory component. These operations correspond to certain of the primitive combinators used by Curry and Feys (1958) to define the foundations of the λ-calculus, notably including functional composition and "type raising". For example:

(4) a. Subject Type Raising: S/(S\NP) --> NP
    b. Rightward Composition: X/Z --> X/Y Y/Z

These combinatory operations allow additional, non-standard "surface structures" like the following, which arises from the type-raising of the subject John into a function over predicates, which composes with the verb, which is of course a function into predicates:

(5)  John       must        leave
     NP         (S\NP)/VP   VP
     -------- >raise
     S/(S\NP)
     -------------------- >compose
     S/VP
     ------------------------------ >apply
     S

In general, wherever orthodox surface structure posits a right-branching structure like (6a) below, these new operations will allow not only the left-branching structure (6b), but every mixture of right- and left-branching in between:

(6) a. [tree diagram: a fully right-branching derivation over a sequence A B C D]
    b. [tree diagram: the corresponding fully left-branching derivation over A B C D]

The linguistic motivation for including such operations (and the grounds for contesting the standard linguists' view of surface constituency), for details of which the reader is referred to the bibliography, stems from the possibility of extracting over, and also coordinating, a wide range of such non-standard composed structures. A crucial feature of this theory of grammar is that the novel operation of functional composition is associative, so that all the novel analyses like (5) are semantically equivalent to the relevant canonical analysis, like (3). On the other hand, rules of type raising simply map arguments into functions over the functions of which they are arguments, producing the same result, and thus are by themselves responsible for no change in generative capacity; indeed, they can simply be regarded as tools which enable functional composition to operate in circumstances where one or both of the constituents which need to be combined are not initially associated with a functional type, as when combining a subject NP with the verb which follows it.

Grammars of this kind, and the related variety proposed by Karttunen (1986), achieve simplicity in the grammar of movement and coordination at the expense of multiplying the number of derivations according to which an unambiguous string such as the sentence above can be parsed. While we have suggested in earlier papers (Ades and Steedman 1982, Pareschi 1986) that this property can be exploited for incremental semantic interpretation and evaluation, a suggestion which has been explored further by Haddock (1987) and Hinrichs and Polanyi (1986), two potentially serious problems arise from these spurious ambiguities. The first is the possibility of producing a whole set of semantically equivalent analyses for each reading of a given string. The second, more serious, problem is that of efficiently coping with non-determinism in the face of such proliferating ambiguity in surface analyses.

The problem of avoiding equivalent derivations is common to parsers of all grammars, even context-free phrase-structure grammars. Since all the spurious derivations are by definition semantically equivalent, the solution seems obvious: just find one of them, say via a "reduce first" strategy of the kind proposed by Ades and Steedman (1982).
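A small self-contained sketch (again our illustration, not the paper's code) shows how adding composition and subject type-raising to the fragment yields two derivations of John must leave with the same final category -- exactly the "spurious" ambiguity at issue:

def fn(result, direction, argument):
    return (result, direction, argument)

def apply_right(x, y):      # X/Y  Y  -->  X
    if isinstance(x, tuple) and x[1] == "/" and x[2] == y:
        return x[0]

def apply_left(y, x):       # Y  X\Y  -->  X
    if isinstance(x, tuple) and x[1] == "\\" and x[2] == y:
        return x[0]

def compose_right(x, y):    # X/Y  Y/Z  -->  X/Z
    if (isinstance(x, tuple) and x[1] == "/" and
            isinstance(y, tuple) and y[1] == "/" and x[2] == y[0]):
        return fn(x[0], "/", y[2])

def raise_subject(np):      # NP  -->  S/(S\NP)
    return fn("S", "/", fn("S", "\\", np))

john, must, leave = "NP", fn(fn("S", "\\", "NP"), "/", "VP"), "VP"

# Canonical derivation (3): must + leave, then John + (must leave).
d1 = apply_left(john, apply_right(must, leave))
# Non-standard derivation (5): raise John, compose with must, apply to leave.
d2 = apply_right(compose_right(raise_subject(john), must), leave)
assert d1 == d2 == "S"      # same category twice: a "spurious" ambiguity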
The problem with this proposal arises from the fact that, assuming left-to-right processing, Rightward Composition may preempt the construction of constituents which are needed as arguments by leftward-combining functional types.(2) Such a depth-first processor cannot take advantage of standard techniques for eliminating backtracking, such as chart-parsing (Kay, 1980), because the subconstituents for the alternative analysis will not in general have been built. For example, if we have produced a left-branching analysis like (6b) above, and then find that we need the constituent X in analysis (6a) (say, to attach a modifier), we will be forced to redo the entire analysis, since not one of the subconstituents of X (such as Y) was a constituent under the previous analysis. Nor of course can we afford a standard breadth-first strategy. Karttunen (1986a) has pointed out that a parser which associates a canonical interpretation structure with substrings in a chart can always distinguish a spurious new analysis of the same string from a genuinely different analysis: spurious analyses produce results that are the same as one already installed on the chart. However, the spurious ambiguity problem remains acute. In order to produce only the genuinely distinct readings, it seems that all of the spurious analyses must be explored, even if they can be discarded again. Even for short strings, this can lead to an unmanageable enlargement of the search space of the processor. Similarly, the problem of reanalysis under backtracking still threatens to overwhelm the parser. In the face of this problem Wittenburg (1986) has recently argued that massive heuristic guidance by strategies quite problematically related to the grammar itself may be required to parse at all with acceptable costs in the face of spurious ambiguities (see also Wittenburg, this conference). The present paper concerns an alternative unification-based chart-parsing solution which is grammatically transparent, and which we claim to be generally applicable to parsing "genuine" attachment ambiguities, under extensions to CG which involve associative operations.

2 If we had chosen to process right-to-left, then an identical problem would arise from the involvement of Leftward Composition.

2. Unification-based Combinatory Categorial Grammars
As Karttunen (1986), Uszkoreit (1986), Wittenburg (1986), and Zeevat et al. (1986) have noted, unification-based computational environments (Shieber 1986) offer a natural choice for implementing the categories and combination rules of CGs, because of their rigorously defined declarative semantics. We describe below a unification-based realisation of CCG which is both transparent to the linguistically motivated properties of the theory of grammar and can be directly coupled to the parsing methodology we offer further on.

2.1. A Restricted Version of Graph-unification
We assume, like all unification formalisms, that grammatical constituents can be represented as feature-structures, which we encode as directed acyclic graphs (dags). A dag can be either:
(i) a constant
(ii) a variable
(iii) a finite set of label-value pairs (features), where any value is itself a dag, and each label is associated with one and only one value
We use round brackets to define sets, and we notate features as [label value]. We refer to variables with symbols starting with capital letters, and to labels and constants with symbols starting with lower-case letters. The following is an example of a dag:

(7) ([a e]
     [b ([c x]
         [d f])])
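To fix intuitions, here is one possible Python encoding of such dags (our assumption for illustration; the paper itself stays with the abstract set-theoretic definition): constants as lower-case strings, variables as objects of a Var class, and feature-sets as dicts from labels to values.

class Var:
    """An uninstantiated dag variable; object identity is what matters,
    the name is cosmetic."""
    def __init__(self, name):
        self.name = name
    def __repr__(self):
        return self.name

# Constants are lower-case strings; feature-sets are dicts mapping
# labels to values, which are themselves dags.  The same Var object
# may occur in several places, encoding structure sharing.
X = Var("X")
dag = {"a": "e",
       "b": {"c": X,
             "d": "f"}}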
Like other unification-based grammars, we adopt dags as the data-structures encoding categorial feature information because of the conceptual perspicuity of their set-theoretic definition. However, the variety of unification between dags that we adopt is more restrictive than the one used in standard graph-unification formalisms like PATR-2 (Shieber 1986), and closely resembles term-unification as adopted in logic-programming languages.

We define unification by first defining a partial ordering of subsumption over dags in a similar (albeit more restricted) way to previous work discussed in Shieber (1986). A dag D1 subsumes a dag D2 if the information contained in D1 is a (not necessarily proper) subset of the information contained in D2. Thus, variables subsume all other dags, as they contain no information at all. Conversely, a constant subsumes, and is subsumed by, itself alone. Finally, subsumption between dags which are feature-sets is defined as follows. We refer to two feature-sets D1 and D2 as variants of each other if there is an isomorphism mapping each feature in D1 onto a feature with the same label in D2. Then a feature-set D1 subsumes a feature-set D2 if and only if:
(i) D1 and D2 are variants; and
(ii) whenever a feature f in D1 is mapped onto a feature f' in D2, the value of f subsumes the value of f'.

The unification of two dags D1 and D2 is then defined as the most general dag D which is subsumed by both D1 and D2. Like most other unification-based approaches, we assume that from a procedural point of view, the process of obtaining the unification of two dags D1 and D2 requires that they be destructively modified to become the same dag D. (We also use the term unification to refer to this process.) For example, let D1 and D2 be the two following dags:

(8) ([a ([b c])]      ([a Y]
     [d Z]             [d g]
     [e X])            [e Z])

Then the following dag is the unification of D1 and D2:

(9) ([a ([b c])]
     [d g]
     [e g])

However, under the present definition of unification, as opposed to the more general PATR-2 definition, the above is not the unification of the following pair of dags:

(10) ([a ([b c])]     ([d g]
      [d Z])           [e Z])

These two dags are not unifiable in present terms, because under the above definition of subsumption, unification of two feature-sets can only succeed if they are variants. It follows that a dag resulting from unification must have the same feature population as the two feature-structures that it unifies. The present definition of unification thus resembles term unification in invariably yielding a feature-set with exactly the same structure as both of the input feature-sets, via the instantiation of variables. The only difference from standard term unification is that it is defined over dags, rather than standard terms. By contrast, standard graph-unification can yield a feature-set containing features initially entirely missing from one or other of the unified feature-sets. The significance of this point will emerge later on, in the discussions of the procedural neutrality of combinatory rules in section 2.4, and of the related transparency property of functional categories in section 2.3. Since the properties in question inhere in the grammar itself, to which unification is merely transparent, there is nothing in our approach that is incompatible with the more general definition of graph unification offered by PATR-2.
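The following is a minimal sketch of this restricted, term-like unification over the dict encoding used above (our illustration; in particular, the use of a substitution environment instead of the paper's destructive modification is our own simplification):

class Var:
    def __init__(self, name): self.name = name
    def __repr__(self): return self.name

def walk(d, subst):
    """Chase variable bindings in the substitution environment."""
    while isinstance(d, Var) and id(d) in subst:
        d = subst[id(d)]
    return d

def unify(d1, d2, subst):
    """Restricted unification: feature-sets must be variants, i.e.
    carry exactly the same labels.  Returns the extended substitution,
    or None on failure."""
    d1, d2 = walk(d1, subst), walk(d2, subst)
    if isinstance(d1, Var):
        subst[id(d1)] = d2; return subst
    if isinstance(d2, Var):
        subst[id(d2)] = d1; return subst
    if isinstance(d1, dict) and isinstance(d2, dict):
        if set(d1) != set(d2):      # not variants: fail, unlike PATR-2
            return None
        for label in d1:
            if unify(d1[label], d2[label], subst) is None:
                return None
        return subst
    return subst if d1 == d2 else None   # constants

# Example (8)/(9): d binds Z to g, and e shares Z with X, so X becomes g.
Z, X, Y = Var("Z"), Var("X"), Var("Y")
D1 = {"a": {"b": "c"}, "d": Z, "e": X}
D2 = {"a": Y, "d": "g", "e": Z}
s = unify(D1, D2, {})
assert walk(X, s) == "g" and walk(Z, s) == "g"
# Example (10): different feature populations, so unification fails.
assert unify({"a": {"b": "c"}, "d": Var("Z")},
             {"d": "g", "e": Var("Z2")}, {}) is None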
However, in order to establish the correctness of our proposal for efficient parsing of extended categorial grammars using the more general definition, we would have had to neutralise its greater power with more laborious constraints on the encoding of entries in the categorial lexicon as dags than those we actually require below. The more restricted version we propose preserves most of the advantages of graph over term data-structures pointed out in Shieber (1986).(3)

2.2. Categories as Feature Structures
We encode constituents corresponding to non-functional categories, such as the noun-phrases below, as feature-sets defining the three major attributes syntax, phonology and semantics, abbreviated for reasons of space to syn, pho, and sem (the examples of feature-based categories given below are of course simplified for the purposes of concise exposition -- for instance, we omit any specification of agreement information in the value associated with the syn(tax) label):

(11) John := ([syn np]
              [pho john]
              [sem john'])

(12) Mary := ([syn np]
              [pho mary]
              [sem mary'])

Constituents corresponding to functional categories are feature-sets characterized by a triple of attributes, result, direction, and argument, abbreviated to res, dir, and arg. The value associated with dir(ection) can be instantiated to one of the constants / and \, and the values associated with res(ult) and arg(ument) can be associated with any functional or non-functional category. (Thus our functions are "curried", and may be higher order.)

We impose the simple but crucial requirement of transparency over the well-formedness of functional categories in feature-based CCG. Intuitively, this requirement corresponds to the idea that any change to the structure of the value of arg(ument) caused by unification must be reflected in the value of res(ult). Given the definition of unification in the section above, this requirement can be simply stated as follows:

(13) Functional categories must be transparent, in the sense that every uninstantiated feature in the value of a function's arg(ument) feature -- that is, every feature whose value is a variable -- must share that variable value with some feature in the value of the function's res(ult) feature.

Thus, whenever a feature in a function's arg(ument) is instantiated by unification, some other feature in its res(ult) will be instantiated identically, as a side-effect of the destructive replacement of structures imposed by unification. Variables in the value of the arg(ument) of a functional category therefore have the sole effect of increasing the specificity of the information contained in the value of its res(ult). As the combinatory rules of CCG build new constituents exclusively in terms of information already contained in the categories that they combine, a requirement that all the functional categories in the lexicon be transparent in turn guarantees the transparency of any functional category assigned to complex constituents generated by the grammar.

3 Calder (1987) and Thompson (1987) have independently motivated similar approaches to constraining unification in encoding linguistic theories.
The following feature-based functional category for a lexical transitive tensed verb obeys the transparency requirement (the operator * indicates string concatenation):

(14) loves := ([res ([res ([syn s]
                           [pho P1*loves*P2]
                           [sem ([act loving]
                                 [agent S1]
                                 [patient S2])])]
                     [dir \]
                     [arg ([syn np]
                           [pho P1]
                           [sem S1])])]
               [dir /]
               [arg ([syn np]
                     [pho P2]
                     [sem S2])])

When two adjacent feature-structures corresponding to a function category X1 and an argument X2 are combined by functional application, a new feature-structure X0 is constructed by unifying the argument feature-structure X2 with the value of the arg(ument) in the function feature-structure X1. The result X0 is then unified with the res(ult) of the function. For example, Rightward Application can be expressed in a notation adapted from PATR-2 as follows. We use the notation <l1 ... ln> for a path of feature labels of length n, and we identify as Xn(<l1 ... ln>) the value associated with the feature identified by the path <l1 ... ln> in the dag corresponding to a category Xn. We indicate unification with the equality sign, =. Rightward Application can then be written as:

(15) Rightward Application: X0 --> X1 X2
     X1(<direction>) = /
     X1(<arg>) = X2
     X1(<result>) = X0

Application of this rule to the functional feature-set (14) for the transitive verb loves and the feature-set (12) for the noun-phrase Mary yields the following structure for the verb-phrase loves Mary:

(16) loves Mary := ([res ([syn s]
                          [pho P1*loves*mary]
                          [sem ([act loving]
                                [agent S1]
                                [patient mary'])])]
                    [dir \]
                    [arg ([syn np]
                          [pho P1]
                          [sem S1])])

To rightward-compose two functional categories according to rule (4b), we similarly unify the appropriate arg(ument) and res(ult) features of the input functions according to the following rule:

(17) Rightward Composition: X0 --> X1 X2
     X1(<direction>) = /
     X2(<direction>) = /
     X1(<arg>) = X2(<result>)
     X2(<direction>) = X0(<direction>)
     X1(<result>) = X0(<result>)
     X2(<arg>) = X0(<arg>)

For example, suppose that the non-functional feature-set (11) for the noun-phrase John is type-raised into the following functional feature-set, according to rule (4a), whose unification-based version we omit here:

(18) John := ([res ([syn s]
                    [pho P]
                    [sem S])]
              [dir /]
              [arg ([res ([syn s]
                          [pho P]
                          [sem S])]
                    [dir \]
                    [arg ([syn np]
                          [pho john]
                          [sem john'])])])

Then (18) can be combined by Rightward Composition with (14) to obtain the following feature structure for the functional category corresponding to John loves:

(19) John loves := ([res ([syn s]
                          [pho john*loves*P2]
                          [sem ([act loving]
                                [agent john']
                                [patient S2])])]
                    [dir /]
                    [arg ([syn np]
                          [pho P2]
                          [sem S2])])

Leftward-combining rules are defined analogously to the rightward-combining rules above.
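To show how rule (15) runs, here is a self-contained sketch (again ours, with a non-destructive substitution environment in place of the paper's destructive unification, and with the pho attribute omitted to keep the example short) that builds the category for loves Mary by unifying Mary with the arg of loves:

class Var:
    def __init__(self, name): self.name = name
    def __repr__(self): return self.name

def walk(d, s):
    while isinstance(d, Var) and id(d) in s:
        d = s[id(d)]
    return d

def unify(d1, d2, s):
    d1, d2 = walk(d1, s), walk(d2, s)
    if isinstance(d1, Var): s[id(d1)] = d2; return s
    if isinstance(d2, Var): s[id(d2)] = d1; return s
    if isinstance(d1, dict) and isinstance(d2, dict):
        if set(d1) != set(d2):
            return None           # restricted unification: variants only
        for k in d1:
            if unify(d1[k], d2[k], s) is None:
                return None
        return s
    return s if d1 == d2 else None

def resolve(d, s):
    d = walk(d, s)
    return {k: resolve(v, s) for k, v in d.items()} if isinstance(d, dict) else d

def apply_right(fn, arg, s):
    # Rule (15): X1(<direction>) = /, X1(<arg>) = X2, X1(<result>) = X0
    if fn["dir"] != "/" or unify(fn["arg"], arg, s) is None:
        return None
    return resolve(fn["res"], s)

S1, S2 = Var("S1"), Var("S2")
# A cut-down version of (14); S2 is shared between arg and res, as
# transparency requirement (13) demands.
loves = {"res": {"res": {"syn": "s",
                         "sem": {"act": "loving", "agent": S1, "patient": S2}},
                 "dir": "\\",
                 "arg": {"syn": "np", "sem": S1}},
         "dir": "/",
         "arg": {"syn": "np", "sem": S2}}
mary = {"syn": "np", "sem": "mary'"}
print(apply_right(loves, mary, {}))   # the loves Mary category, cf. (16)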
2.3. Derivational Equivalence Modulo Composition
Let us denote the operations of applying and composing categories by writing apply(X, Y) and comp(X, Y) respectively. Then by the definition of the operations themselves, and in particular because of the associativity of functional composition, the following equivalences hold across type-derivations:

(20) apply(comp(X1, X2), X3) = apply(X1, apply(X2, X3))
(21) comp(comp(X4, X5), X6) = comp(X4, comp(X5, X6))

More formally, the left-hand side and right-hand side of both equations define equivalent terms in the combinatory logic of Curry and Feys (1958).(4) It follows that all alternative derivations of an arbitrary sequence of functions and arguments that are allowed by different orders of application and composition in which a composition is merely traded for an application also define equivalent terms of Combinatory Logic.(5)

4 The terms are equivalent in the technical sense that they reduce to an identical normal form.
5 The inclusion of certain higher-order function categories in the lexicon (of which "modifiers of modifiers" like formerly would be an example in English) means that composition may affect the argument structure itself, thereby changing meaning and giving rise to non-equivalent terms. This possibility does not affect the present proposal, and can be ignored.

So, for instance, a type for the sentence John loves Mary can be assigned either by rightward-composing the type-raised function John, (18), with loves, (14), to obtain the feature-structure (19) for John loves, and then rightward-applying (19) to Mary, (12), to obtain a feature-structure for the whole sentence; or, conversely, it can be assigned by rightward-applying loves, (14), to Mary, (12), to obtain the feature-structure (16) for loves Mary, and then rightward-applying John, (18), to (16) to obtain the final feature-structure. In both cases, as the reader may care to verify, the type-assignment we get is the following:

(22) John loves Mary := ([syn s]
                         [pho john*loves*mary]
                         [sem ([act loving]
                               [agent john']
                               [patient mary'])])

An important property of CCG is that it unites syntactic and semantic combination in uniform operations of application and composition. Unification-based CCG makes this identification explicit by uniting the syntactic type of a constituent and its interpretation in a single feature-based type. It follows that all derivations for a given string induced by functional composition correspond to the same unique feature-based type, which cannot be assigned to any other constituent in the grammar.(6) This property, which we characterize formally elsewhere, is a direct consequence of the fact that unification is itself an associative operation.

6 If there is genuine ambiguity, a constituent will of course be assigned more than one type.
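Equivalence (20) can also be checked directly at the semantic level, where comp is the combinator B of Curry and Feys. A small sketch (ours) with curried Python functions, whose denotations are invented just to make the point:

def comp(f, g):
    # Functional composition: the combinator B, B f g = lambda x: f(g(x)).
    return lambda x: f(g(x))

# Hypothetical curried denotations for the example sentence.
loves = lambda obj: lambda subj: ("loving", subj, obj)   # (S\NP)/NP-style
john  = lambda pred: pred("john'")                       # type-raised subject
mary  = "mary'"

# (20): apply(comp(X1, X2), X3) = apply(X1, apply(X2, X3))
left  = comp(john, loves)(mary)     # compose first, then apply to Mary
right = john(loves(mary))           # apply twice, in the canonical order
assert left == right == ("loving", "john'", "mary'")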
It follows in turn that a feature-based category like (22) associated with a given constituent not only contains all the information necessary for its grammatical interpretation, but also determines an equivalence class of derivations for that constituent, a point which is related to Karttunen's (1986) proposal for the spurious ambiguity problem (cf. section 1 above), but which we exploit differently, as follows.

2.4. Procedural Neutrality of Combinatory Rules
The rules of combinatory categorial grammar are purely declarative, and unification preserves this property, so that, as with other unification-based grammatical formalisms (cf. Shieber 1986), there is no procedural constraint on their use. So far, we have only considered examples in which such rules are applied "bottom-up", as in example (16), in which the rule of application (15) is used to define the feature structure X0 on the left-hand side of the rule in terms of the feature structures X1 and X2 on the right, respectively instantiated as the function loves (14) and its argument Mary (12). However, other procedural realizations are equally viable.(7) In particular, it is a property of rules (15) and (17) (and of all the combinatory rules permitted in the theory -- cf. Steedman 1986) that if any two out of the three elements that they relate are specified, then the third is entirely and uniquely determined. This property, which we call procedural neutrality, follows from the form of the rules themselves and from the transparency property (13) of functional categories, under the definition of unification given in section 2.1 above.(8)

This property of the grammar offers a way to short-circuit the entire problem of non-determinism in a chart-based parser for grammars characterised by spurious analyses engendered by associative rules such as composition. The procedural neutrality of the combinatory rules allows a processor to recover constituents which are "implicit" in analysed constituents in the sense that they would have been built if some other equivalent analysis had happened to have been the one followed by the processor. For example, consider the situation where, faced with the string John loves Mary dealt with in the last section, the processor has avoided multiple analyses by composing John, (18), with loves, (14), to obtain John loves, (19), and has then applied that to Mary, (12), to obtain John loves Mary, (22), ignoring the other analysis. If the parser turns out to need the constituent loves Mary, (16) (as it will if it is to find a sensible analysis when the sentence turns out to be John loves Mary madly), then it can recover that constituent by defining it via the rule of Rightward Application in terms of the feature structures for John loves Mary, (22), and John, (18). These two feature structures can be used to respectively instantiate X0 and X1 in the rule as stated at (15). The reader may verify that instantiating the rule in this way determines the required constituent to be exactly the same category as (16).

This particular procedural alternative to the bottom-up invocation of combinatory rules will be central to the parsing algorithm which we present in the following section, so it will be convenient to give it a name. Since it is the "parent" category X0 and the "left-constituent" category X1 that are instantiated, it seems natural to call this alternative left-branch instantiation of a combinatory rule, a term which we contrast with the bottom-up instantiation invoked in earlier examples.

The significance of this point is as follows. Let us suppose that we can guarantee that a parser will always make available, say in a chart, the constituent that could have combined under bottom-up instantiation as a left-constituent with an implicit right-constituent to yield the same result as the analysis that was actually followed. In that case, the processor will be able to recover the implicit right-constituent by left-branch instantiation of a single combinatory rule, without restarting syntactic analysis and without backtracking or search of any kind. The following algorithm does just that.

7 There is an obvious analogy here with the fact that unification-based programming languages like Prolog do not have any predefined distinction between the input and the output parameters of a given procedure.
8 From a formal point of view, procedural neutrality is a consequence of the fact that unification-based combinatory rules, as characterised above, are extensional. Thus, we follow Pereira and Shieber (1984) in claiming that the "bottom-up" realization of a unification-based rule r corresponds to the unification of a structure Er encoding the equational constraints of r, and a structure Dr corresponding to the merging of the structures instantiating the elements of the right-hand side of r. A structure Nr is consequently assigned as the instantiation of the left-hand side of r by individuating a relevant substructure of the unification of the pair <Dr, Er>. If r is a rule of unification-based CCG, then the fact that Nr is the instantiation of the left-hand side of r both in terms of <Dr, Er> and <Dr', Er> guarantees that Dr and Dr' are identical (in the sense that they subsume each other).
3. A Lazy Chart Parsing Methodology
Derivational equivalence modulo composition, together with the procedural neutrality of unification-based combinatory rules, allows us to define a novel generalisation of the classic chart parsing technique for extended CGs, which is "lazy" in the sense that:
a) only edges corresponding to one of the set of semantically equivalent analyses are installed on the chart;
b) surface constituents of already parsed parts of the input which are not on the chart are directly generated from the structures which are, rather than being built from scratch via syntactic reanalysis.

3.1. A Bottom-up Left-to-Right Algorithm
The algorithm we describe here implements a bottom-up, left-to-right parser which delivers all semantically distinct analyses. Other algorithms based on alternative control strategies are equally feasible. In this specific algorithm, the distinction between active and inactive edges is drawn in a rather different way from the standard one. For an edge E to be active does not mean that it is associated with an incomplete constituent (indeed, the distinction between complete and incomplete constituents is eliminated in CCG); it simply means that E can trigger new actions of the parser to install other edges, after which E itself becomes inactive. By contrast, inactive edges cannot initiate modifications to the state of the parser. Active edges can be added to the chart according to the three following actions:

Scanning: if a is a word in the input string then, for each lexical entry X associated with a, add an active edge labeled X spanning the vertices corresponding to the position of a on the chart.

Lifting: if E1 is an active edge labeled X1, then for every unary rule of type raising which can be instantiated as X0 --> X1, add an active edge E0 labeled X0 and spanning the same vertices as E1.

Reducing: if an edge E2 labeled X2 has a left-adjacent edge E1 labeled X1 and there is a combinatory rule which can be instantiated as X0 --> X1 X2, then add an active edge E0 labeled X0 spanning the starting vertex of E1 and the ending vertex of E2.

The operational meaning of Scanning and Lifting should be clear enough. The Reducing action is the workhorse of the parser, building new constituents by invoking combinatory rules via bottom-up instantiation.
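A schematic rendering (ours) of the edge data structure and the three actions, with categories as opaque objects and the combinatory rules abstracted into callables; the generator bookkeeping and the Revealing action introduced in the next paragraphs are deliberately omitted here:

from dataclasses import dataclass

@dataclass(frozen=True)
class Edge:
    start: int
    end: int
    cat: object      # a category; a feature structure in the full system

def parse_actions(words, lexicon, raise_rules, reduce_rules):
    """Skeleton of Scanning / Lifting / Reducing.  `raise_rules` maps a
    category to its type-raised categories; `reduce_rules` maps a pair
    of adjacent categories to result categories.  Both are placeholders
    standing in for the unification-based rules of section 2."""
    chart, agenda = set(), []
    for i, w in enumerate(words):                            # Scanning
        agenda += [Edge(i, i + 1, c) for c in lexicon[w]]
    while agenda:
        e = agenda.pop()        # e is active while it is being processed
        if e in chart:
            continue
        chart.add(e)            # ... after which it becomes inactive
        agenda += [Edge(e.start, e.end, c) for c in raise_rules(e.cat)]  # Lifting
        for other in list(chart):                            # Reducing
            if other.end == e.start:
                agenda += [Edge(other.start, e.end, c)
                           for c in reduce_rules(other.cat, e.cat)]
            if other.start == e.end:
                agenda += [Edge(e.start, other.end, c)
                           for c in reduce_rules(e.cat, other.cat)]
    return chart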
Whenever Reducing is effected over two edges E1 and E2 to obtain a new edge E0, we ensure that E1 is marked as a left-generator of E0. If the rule in the grammar which was used is Rightward Composition, then E2 is marked as a right-generator of E0. The intuition behind this move is that right-generators are rightward functional categories which have been composed into, and will therefore give rise to spurious analyses if they take part in further rightward combinations, as a consequence of the property of derivational equivalence modulo composition, discussed in section 2.3. Left-generators correspond instead to choice points from which it would have been possible to obtain a derivationally different but semantically equivalent constituent analysis of some part of the input string. They thus constitute suitable constituents for use in recovering implicit right-constituents of other constituents in the chart via the invocation of combinatory rules under the procedure of left-branch instantiation discussed in the last section. In order to state exactly how this is done, we need to introduce the left-starter relation, corresponding to the transitive closure of the left-generator relation:
(i) A left-generator L of an edge E is a left-starter of E.
(ii) If L is a left-starter of E, then any left-starter of L is a left-starter of E.

The parser can now add inactive edges corresponding to implicit right-constituents according to the following action:

Revealing: if an edge E is labeled by a leftward-looking functional type X and there is a combinatory rule which can be instantiated as X' --> X2 X, then if
(i) there is an edge E0 labeled X0 left-adjacent to E,
(ii) E0 has a left-starter E1 labeled X1, and
(iii) there is a combinatory rule which can be instantiated as X0 --> X1 X2,
then add to the chart an inactive edge E2 labeled X2 spanning the ending vertex of E1 and the starting vertex of E, unless there is already an edge labelled in the same way and spanning the same vertices. Mark E2 as a right-generator of E0 if the rule used in (iii) was Rightward Composition.

To summarise the section so far: if the parser is devised so as to avoid putting on the chart subconstituents which would lead to redundant equivalent derivations, non-determinism in the grammar will always give rise to cases which require some of the excluded constituents. In a left-to-right processor this typically happens when the argument required by a leftward-looking functional type has been mistakenly combined in the analysis of a substring left-adjacent to that leftward-looking type. However, such an implicit or hidden constituent could have only been obtained through an equivalent derivation path for the left-adjacent substring. It follows that we can "reveal" it on the chart by invoking a combinatory rule in terms of left-branch instantiation.

We can now informally characterize the algorithm itself as follows: the parser does Scanning for each word in the input string going left-to-right; moreover, whenever an active edge A is added to the chart, the following actions are taken in order:
(i) the parser does Lifting over A;
(ii) if A is labeled by a leftward-looking type, then for every edge E left-adjacent to A the parser does Revealing over E with respect to A;
(iii) for every edge E left-adjacent to A the parser does Reducing over E and A, with the constraint that if A is not labeled by a leftward-looking type then E must not be a right-generator of any edge E'.
The parser returns the set of categories associated with edges spanning the whole input, if such a set is not empty; it fails otherwise.
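The left-starter computation and the Revealing step can be sketched as follows (our rendering; the `left_generators` attribute on edges, the `make_edge` constructor, and `left_branch_instantiate` -- a stand-in for solving X0 --> X1 X2 for X2 by left-branch instantiation -- are all assumed interfaces):

def left_starters(edge):
    """Transitive closure of the left-generator relation, clauses (i)-(ii)."""
    seen, stack = set(), list(edge.left_generators)
    while stack:
        e = stack.pop()
        if e not in seen:
            seen.add(e)
            stack.extend(e.left_generators)
    return seen

def reveal(chart, E, left_branch_instantiate, make_edge):
    """E is an active edge with a leftward-looking type.  For each
    left-adjacent edge E0 and each left-starter E1 of E0, solve the
    rule X0 --> X1 X2 for the implicit right-constituent X2, and
    return it as a new inactive edge spanning the gap between E1's
    end and E's start."""
    new_edges = []
    for E0 in [e for e in chart if e.end == E.start]:
        for E1 in left_starters(E0):
            X2 = left_branch_instantiate(E0.cat, E1.cat)  # None if no rule fits
            if X2 is not None:
                new_edges.append(make_edge(E1.end, E.start, X2))
    return new_edges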
3.2. An Example
In the interests of brevity and simplicity, we eschew all details to do with unification itself in the following examples of the workings of the parser, reverting to the original categorial notation for CCG of section 1, bearing in mind that the categories are now to be read strictly as a shorthand for the fuller notation of unification-based CCG. For similar reasons of simplicity in exposition, we assume for the present purpose that the only type-raising rule in the grammar is the subject rule (4a). The algorithm analyses the sentence John loves Mary madly as follows. First, the parser Scans the first word John, adding to the chart an active NP edge corresponding to its sole lexical entry, and spanning the word in question:

(23) [chart diagram: an active edge NP spanning John]

(We adopt the convention that active edges are indicated by upper-case categories, while inactive edges will be indicated with lower-case categories.) Since the edge in question is active, it falls under the second clause of the algorithm. The Lifting condition (i) of this clause applies, since there is a rule which type-raises over NP, so a new active edge of type S/(S\NP) is added, spanning the same word, John (no other conditions apply to the NP active edge, and it becomes inactive):

(24) [chart diagram: an active edge S/(S\NP) and an inactive edge np, both spanning John]

Neither Lifting, Revealing, nor Reducing yield any new edges, so the new active edge merely becomes inactive. The next word is Scanned to add a new lexical active edge of type (S\NP)/NP spanning loves:

(25) [chart diagram: inactive edges s/(s\np) and np over John, and an active edge (S\NP)/NP over loves]

The new lexical edge Reduces with the type-raised subject to yield a new active edge of type S/NP. The subject category is marked as the new edge's left-generator, and (because the combinatory rule was Rightward Composition) the verb category is marked as its right-generator. Nothing more results from loves, and neither Lifting, Revealing nor Reducing yield anything from the new edge, so it too becomes inactive, and the next word is Scanned to add a new lexical active NP edge corresponding to Mary:

(26) [chart diagram: an inactive edge s/np spanning John loves, its subconstituent edges, and an active edge NP over Mary]

This edge yields two new active edges before becoming inactive, one of type S/(S\NP) via Lifting and the subject rule, and one of type S, via Reducing with the s/np edge to its left by the Forward Application rule (we omit the former from the illustration, because nothing further happens to it, but it is there nonetheless):

(27) [chart diagram: an active edge S spanning John loves Mary, built from the s/np edge and the np edge over Mary]

The s/np edge is in addition marked as the left-generator of the S. Note that Reducing would potentially have allowed a third new active edge corresponding to loves Mary to be added by Reducing the new active NP edge corresponding to Mary with the left-adjacent (s\np)/np edge, loves. However, this edge has been marked as a right-generator, and is therefore not allowed to Reduce by the algorithm. Nothing new results from the new active S edge, so it becomes inactive and the next word madly is scanned to add a new active edge:

(28) [chart diagram: an inactive edge s spanning John loves Mary, and an active edge (S\NP)\(S\NP) over madly]
This active edge, being a leftward-looking functional type, precipitates Revealing. Since there is a rule (Backward Application, 2b) which would allow madly, (S\NP)\(S\NP), to combine with a left-adjacent s\np, and there is a rule (Forward Application, 2a) which would allow a left-starter John, s/(s\np), to combine with such an s\np to yield the s which is left-adjacent to madly (and since there is no left-adjacent s\np there already), the rule of Forward Application can be invoked via Left-branch Instantiation to Reveal the inactive edge loves Mary, s\np:

(29) [chart diagram: the revealed inactive edge s\np spanning loves Mary]

The (still) active backward modifier madly can now Reduce with the newly introduced s\np, to yield a new active edge S\NP corresponding to loves Mary madly, before becoming inactive:

(30) [chart diagram: an active edge S\NP spanning loves Mary madly]

The new active edge potentially gives rise to two semantically equivalent Reductions with the subject John to yield S -- one with its ground np type, and one with its raised type, s/(s\np). Only one of these is effected, because of a detail dealt with in the next section, and the algorithm terminates with a single S edge spanning the string:

(31) [chart diagram: a single s edge spanning John loves Mary madly]

In an attachment-ambiguous sentence like the following, which we leave as an exercise, two predicates, believes John loves Mary and loves Mary, are revealed in the penultimate stage of the analysis, and two semantically distinct analyses result:

(32) Fred believes John loves Mary passionately

Space permits us no more than to note that this procedure will also cope with another class of constructions which constitute a major source of non-determinism in natural language parsing, namely the diverse coordinate constructions whose categorial analysis is discussed by Dowty (1985) and Steedman (1985, 1987).

4. Type Raising and Spurious Ambiguity
As noted at example (30) above, type raising rules introduce a second kind of spurious ambiguity connected to the interactions of such rules with functional application rather than functional composition. If the processor can Reduce via a rule of application on a type-raised category, then it can also always invoke the opposite rule of application on the unraised version of the same category to yield the same result. Spurious ambiguity of this kind is trivially easy to avoid, as (unlike the kind associated with composition) it can always be detected locally by the following redundancy check on attachment of new edges to the chart in Reducing: when Reducing creates an edge via functional application, it is only added to the chart if there is no edge associated with the same feature structure and spanning the same vertices already on the chart.

5. Alternative Control Strategies and Grammatical Formalisms
The algorithm described above is a pure bottom-up parsing procedure which has a close relative in the Cocke-Kasami-Younger algorithm for context-free phrase-structure grammars. However, our chart-parsing methodology is completely open to alternative control options. In particular, Pareschi (forthcoming) describes an adaptation of the Earley algorithm, which, in virtue of its top-down prediction stage, allows for efficient application of more general type-raising rules than are considered here. Formal proofs of the correctness of both these algorithms will be presented in the same reference. The possibility of exploiting this methodology for improving processing of other unification-based extensions of CG involving spurious ambiguity, like the one reported in Karttunen (1986a), is also under exploration.

6. Conclusion
The above approach to chart-parsing with extensions to CGs characterised by spurious ambiguities allows us to define algorithms which do not build significantly more edges than chart parsers for more standard theories of grammar. Our technique is fully transparent with respect to our grammatical formalism, since it is based on properties of associativity and procedural neutrality inherent in the grammar itself.(9)

9 Chart parsers based on the methodology described here and written in Quintus Prolog have been developed on a Sun workstation.

ACKNOWLEDGEMENTS
We thank Inge Bethke, Kit Fine, Ellen Hays, Aravind Joshi, Dale Miller, Henry Thompson, Bonnie Lynn Webber, and Kent Wittenburg for help and advice. Parts of the research were supported by: an Edinburgh University Research Studentship; an ESPRIT grant (project 393) to CCS, Univ. Edinburgh; a Sloan Foundation grant to the Cognitive Science Program, Univ. Pennsylvania; and NSF grant IRI-10413 A02, ARO grant DAA6-29-84K-0061 and DARPA grant N0014-85-K0018 to CIS, Univ. Pennsylvania.
The possibility of exploiting this methodology for improving processing of other unification-based extensions of CG involv- ing spurious ambiguity, like the one reported in Kartmnen (1986a), is also under exploration. 6. Conclusion The above approach to chart-parsing with extensions to CGs characterised by spurious ambiguities allows us to def'me algo- rithms which do not build significantly more edges than chart parsers for more standard theories of grammar. Our technique is fully transparent with respect to our grammatical formalism, since it is based on properties of associativity and procedural neutrality inherent in the grammar itself. 9 ACKNOWLEDGEMENTS We thank Inge Bethke, Kit F'me, Ellen Hays, Aravind Joshi, Dale Miller, Henry Thompson, Bonnie Lynn Webher, and Kent Wittenberg for help and advice. Parts of the research were supported by: an Edin- burgh Univeni W Research Studentship; an ESPRIT grant (project 393) to CCS, Univ. Edinburgh; a Sloan Foundation grant to the Cognitive Science Program, Univ. Pennsylvania; and NSF grant IRI-10413 A02. ARO grant DAA6-29- 84K-0061 and DARPA grant N0014-85-K0018 to CIS, Univ. Pennsylvania. 9 Chart parsers based on the methodology described here and written in Quintus Prolog have been developed on a Sun workstation. REFERENCES Ades, A. and Steedman, M. J. (1982) On the Order of Words. Linguistics and Philosophy, 44, 517-518. Calder, J. (1987) Typed Unification for Natural Language Processing. Ms, Univ. of Edinburgh Curry, H. B. and Feys, R. (1958) Combinatory Logic, Volume I. Amsterdam: North Holland. Dowry, D. (1985). Type raising, functional composition and non-constituent coordination. In R. Oehrle et al, (eds.), Categorial Grammars and Natural Language Structures, Durdrecht, Reidel. (In press). Haddock, N. J. (1987) Incremental Interpretation and Combinatory Categorial Grammar. In Proceedings of the Tenth International Joint Conference on Artifi- cial Intelligence, Milan, Italy, August, 1987. Hinrichs, E. and Polanyi, L. (1986) Pointing the Way. Papers from the Parasession on Pragrnatics and Grammatical Theory at the Twenty-Second Regional Meeting of the Chicago Linguistic Society, pp.298-314. Karttunen, L. (1986) Radical Lexicalism. Paper presented at the Conference on Alternative Conceptions of Phrase Structure, July 1986, New York. Kay, M. (1980) Algorithm Schemata and Data Structures in Syntactic Processing. Technical Report No. CSL-80- 12, XEROX Palo Alto Research Centre. Pareschi, Remo. 1986. Combinatory Categorial Grammar, Logic Programming, and the Parsing of Natural Language. DAI Working Paper, University of Edinburgh. Pareschi, R. (forthcoming) PhD Thesis, Univ. Edinburgh. Pereint, F. C. N. and Shieber, S. M. (1984) The Semantics of Grammar Formalisms Seen as Computer Languages. In Proceedings of the 22rid Annual Meeting of the ACL, Stanford, July 1984, pp.123-129. Shieber, S. M. (1986) An Introduction to Unification-based Approaches to Grammar, Chicago: Univ. Chicago Press. Stcedman, M. (1985) Dependency and Coordination in the Grammar of Dutch end English. Language, 61,523-568. Steedmen,M. (1986) Combinatory Grammars and Parasitic Gaps. Natural Language and Linguistic Theory, to appear. Steedman, M. (1987) Coordination and Constituency in a Combinatory Grammar. In Mark Baltin and Tony Kroch. (eds.), Alternative Conceptions of Phrase Structure, University of Chicago Press: Chicago. (To appear.) Thompson. H. (1987) FBF- An Alternative to PATR as a Grammatical Assembly Language. Research Paper, Department of A.I, Univ. Edinburgh. 
Uszkoreit, H. (1986) Categorial Unification Grammars. In Proceedings of the 11th International Conference on Computational Linguistics, Bonn, August 1986, pp. 187-194.
Wittenburg, K. W. (1986) Natural Language Parsing with Combinatory Categorial Grammar in a Graph-Unification-Based Formalism. Ph.D. Thesis, Department of Linguistics, University of Texas.
Zeevat, H., Klein, E. and Calder, J. (1987) An Introduction to Unification Categorial Grammar. In N. Haddock et al. (eds.), Edinburgh Working Papers in Cognitive Science, 1: Categorial Grammar, Unification Grammar, and Parsing.
A LOGICAL VERSION OF FUNCTIONAL GRAMMAR

William C. Rounds
University of Michigan
Xerox PARC

Alexis Manaster-Ramer
IBM T.J. Watson Research Center
Wayne State University

1 Abstract
Kay's functional-unification grammar notation [5] is a way of expressing grammars which relies on very few primitive notions. The primary syntactic structure is the feature structure, which can be visualised as a directed graph with arcs labeled by attributes of a constituent, and the primary structure-building operation is unification. In this paper we propose a mathematical formulation of FUG, using logic to give a precise account of the strings and the structures defined by any grammar written in this notation.

2 Introduction
Our basic approach to the problem of syntactic description is to use logical formulas to put conditions or constraints on ordering of constituents, ancestor and descendant relations, and feature attribute information in syntactic structures. The present version of our logic has predicates specifically designed for these purposes. A grammar can be considered as just a logical formula, and the structures satisfying the formula are the syntactic structures for the sentences of the language. This notion goes back to DCG's, but our formulation is quite different. In particular, it builds on the logic of Kasper and Rounds [3], a logic intended specifically to describe feature structures.

The formulation has several new aspects. First, it introduces the oriented feature structure as the primary syntactic structure. One can think of these structures as parse trees superimposed on directed graphs, although the general definition allows much more flexibility. In fact, our notation does away with the parse tree altogether.

A second aspect of the notation is its treatment of word order. Our logic allows small grammars to define free-word-order languages over large vocabularies in a way not possible with standard ID/LP rules. It is not clear whether or not this treatment of word order was intended by Kay, but the issue naturally arose during the process of making this model precise. (Joshi [1] has adopted much the same conventions in tree adjunct grammar.)

A third aspect of our treatment is the use of fixed-point formulas to introduce recursion into grammars. This idea is implicit in DCG's, and has been made explicit in the logics CLFP and ILFP [9]. We give a simple way of expressing the semantics of these formulas which corresponds closely to the usual notion of grammatical derivations. There is an interesting use of type variables to describe syntactic categories and/or constructions.

We illustrate the power of the notation by sketching how the constructions of relational grammar [7] can be formulated in the logic. To our knowledge, this is the first attempt to interpret the relational ideas in a fully mathematical framework. Although relational networks themselves have been precisely specified, there does not seem to be a precise statement of how relational derivations take place. We do not claim that our formalization is the one intended by Postal and Perlmutter, but we do claim that our notation shows clearly the relationship of relational to transformational grammars on one hand, and to lexical-functional grammars on the other.

Finally, we prove that the satisfiability problem for our logic is undecidable.
This should perhaps be an expected result, because the proof relies on simulating Turing machine computations in a grammar, and follows the standard undecidability arguments. The satisfiability problem is not quite the same problem as the universal recognition problem, however, and with mild conditions on derivations similar to those proposed for LFG [2], the latter problem should become decidable.

We must leave efficiency questions unexamined in this paper. The notation has not been implemented. We view this notation as a temporary one, and anticipate that many revisions and extensions will be necessary if it is to be implemented at all. Of course, FUG itself could be considered as an implementation, but we have added the word order relations to our logic, which are not explicit in FUG.

In this paper, which is not full because of space limitations, we will give definitions and examples in Section 3; we will then sketch the relational application in Section 4, and will conclude with the undecidability result and some final remarks.

3 Definitions and examples
3.1 Oriented f-structures
In this section we will describe the syntactic structures to which our logical formulas refer. The next subsection will give the logic itself. Our intent is to represent not only feature information, but also information about ordering of constituents in a single structure.

[Figure 1: A typical DG. Figure 2: An oriented f-structure for a^4 b^4 c^4. The diagrams do not survive extraction.]

We begin with the unordered version, which is the simple DG (directed graph) structure commonly used for non-disjunctive information. This is formalized as an acyclic finite automaton, in the manner of Kasper-Rounds [3]. Then we add two relations on nodes of the DG: ancestor and linear precedence. The key insight about these relations is that they are partial; nodes of the graph need not participate in either of the two relations. Pure feature information about a constituent need not participate in any ordering. This allows us to model the "cset" and "pattern" information of FUG, while allowing structure sharing in the usual DG representation of features.

We are basically interested in describing structures like that shown in Figure 1. A formalism appropriate for specifying such DG structures is that of finite automata theory. A labeled DG can be regarded as a transition graph for a partially specified deterministic finite automaton. We will thus use the ordinary δ notation for the transition function of the automaton. Nodes of the graph correspond to states of the automaton, and the notation δ(q, x) implies that starting at state (node) q a transition path actually exists in the graph labeled by the sequence x, to the state δ(q, x).

Let L be a set of arc labels, and A be a set of atomic feature values. An (A, L)-automaton is a tuple A = (Q, δ, q0, τ), where Q is a finite set of states, q0 is the initial state, L is the set of labels above, δ is a partial function from Q × L to Q, and τ is a partial function from terminating states of A to A. (q is terminating if δ(q, l) is undefined for all l ∈ L.) We require that A be connected and acyclic. The map τ specifies the atomic feature values at the final nodes of the DG. (Some of these nodes can have unspecified values, to be unified in later. This is why τ is only partial.) Let F be the set of terminating states of A, and let P(A) be the set of full paths of A, namely the set {x ∈ L* : δ(q0, x) ∈ F}.
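As an illustration, here is a small Python rendering (ours) of an (A, L)-automaton as a dictionary-based transition table, with the full-path set computed by search; the particular state and label names are invented for the example:

class ALAutomaton:
    """delta: partial transition function as {(state, label): state};
    tau: partial map from terminating states to atomic values."""
    def __init__(self, states, delta, q0, tau):
        self.states, self.delta, self.q0, self.tau = states, delta, q0, tau

    def terminating(self, q):
        # q is terminating if delta(q, l) is undefined for every label l
        return all((q, l) not in self.delta
                   for l in {l for (_, l) in self.delta})

    def full_paths(self):
        """All label sequences from q0 to a terminating state."""
        paths = []
        def walk(q, path):
            if self.terminating(q):
                paths.append(tuple(path))
            for (q1, l), q2 in self.delta.items():
                if q1 == q:
                    walk(q2, path + [l])
        walk(self.q0, [])
        return paths

# A two-arc DG: q0 --a--> q1 and q0 --b--> q2, with atomic values at leaves.
dg = ALAutomaton({"q0", "q1", "q2"},
                 {("q0", "a"): "q1", ("q0", "b"): "q2"},
                 "q0",
                 {"q1": "e", "q2": "f"})
print(dg.full_paths())    # [('a',), ('b',)]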
Now we add the constituent ordering information to the nodes of the transition graph. Let Σ be the terminal vocabulary (the set of all possible words, morphemes, etc.). Now τ can be a partial map from Q to Σ ∪ A, with the requirement that if τ(q) ∈ A, then q ∈ F. Next, let α and < be binary relations on Q, the ancestor and precedence relations. We require α to be reflexive, antisymmetric and transitive; and the relation < must be irreflexive and transitive. There is no requirement that any two nodes must be related by one or the other of these relations. There is, however, a compatibility constraint between the two relations:

∀(q, r, s, t) ∈ Q : (q < r) ∧ (q α s) ∧ (r α t) ⇒ s < t.

Note: We have required that the precedence and dominance relations be transitive. This is not a necessary requirement, and is only for elegance in stating conditions like the compatibility constraint. A better formulation of precedence for computational purposes would be the "immediate precedence" relation, which says that one constituent precedes another, with no constituents intervening. There is no obstacle to having such a relation in the logic directly.

Example. Consider the structure in Figure 2. This graph represents an oriented f-structure arising from an LFG-style grammar for the language {a^n b^n c^n | n ≥ 1}. In this example, there is an underlying CFG given by the following productions:

S --> T C
T --> a T b | a b
C --> c C | c

The arcs labeled with numbers (1, 2, 3) are analogous to arcs in the derivation tree of this grammar. The root node is of "category" S, although we have not represented this information in the structure. The nodes at the ends of the arcs 1, 2, and 3 are ordered left to right; in our logic this will be expressed by the formula 1 < 2 < 3. The other arcs, labeled by COUNT and #, are feature arcs used to enforce the counting information required by the language. It is a little difficult in the graph representation to indicate the node ordering information and the ancestor information, so this will wait until the next section. Incidentally, no claim is made for the linguistic naturalness of this example!

3.2 A presentation of the logic
We will introduce the logic by continuing the example of the previous section. Consider Figure 2. Particular nodes of this structure will be referenced by the sequences of arc labels necessary to reach them from the root node. These sequences will be called paths. Thus the path 12223 leads to an occurrence of the terminal symbol b. Then a formula of the form, say,

12 COUNT = 22 COUNT

would indicate that these paths lead to the same node. This is also how we specify linear precedence: the last b precedes the first c, and this could be indicated by the formula

12223 < 22221.

It should already be clear that our formulas will describe oriented f-structures. We have just illustrated two kinds of atomic formula in the logic. Compound formulas will be formed using ∧ (and) and ∨ (or). Additionally, let l be an arc label. Then an f-structure will satisfy a formula of the form l : φ iff there is an l-transition from the root node to the root of a substructure satisfying φ. What we have not explained yet is how the recursive information implicit in the CFG is expressed in our logic. To do this, we introduce type variables as elementary formulas of the logic. In the example, these are the "category" variables S, T, and C. The grammar is given as a system of equations (more properly, equivalences), relating these variables.
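The compatibility constraint between precedence and dominance can be stated operationally; the following is a small sketch (ours) that checks it over explicit relation sets given as pairs of node names:

def compatible(precedes, dominates):
    """Check: for all q, r, s, t in Q, if q < r, q alpha s and
    r alpha t, then s < t must hold.  Both relations are sets of
    (node, node) pairs."""
    for (q, r) in precedes:
        for (q1, s) in dominates:
            if q1 != q:
                continue
            for (r1, t) in dominates:
                if r1 == r and (s, t) not in precedes:
                    return False
    return True

refl = {(n, n) for n in ("n1", "n2", "l1", "l2")}
dom = refl | {("n1", "l1"), ("n2", "l2")}   # each node dominates one leaf
# Compatible: the leaves inherit the ordering of their ancestors.
assert compatible({("n1", "n2"), ("n1", "l2"), ("l1", "n2"), ("l1", "l2")}, dom)
# Violation: n1 < n2, but the dominated leaves are left unordered.
assert not compatible({("n1", "n2")}, dom)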
We can now present a logical formula which describes the language of the previous section.

S where
  S ::= 1 : T ∧ 2 : C ∧ (1 count = 2 count) ∧ (1 < 2) ∧ φ12
  C ::= (1 : c ∧ 2 : C ∧ (count # = 2 count) ∧ φ12)
        ∨ (1 : c ∧ (count # = end) ∧ φ1)
  T ::= (1 : a ∧ 2 : T ∧ 3 : b ∧ (count # = 2 count) ∧ (1 < 2) ∧ (2 < 3) ∧ φ123)
        ∨ (1 : a ∧ 2 : b ∧ (count # = end) ∧ (1 < 2) ∧ φ12),

where φ12 is the formula (ε α 1) ∧ (ε α 2), in which ε is the path of length 0 referring to the initial node of the f-structure, and where the other φ formulas are similarly defined. (The φ formulas give the required dominance information.) In this example, the set L = {1, 2, 3, #, count}, the set Σ = {a, b, c}, and the set A = {end}. Thus the atomic symbol "end" does not appear as part of any derived string. It is easy to see how the structure in Figure 2 satisfies this formula. The whole structure must satisfy the formula S, which is given recursively. Thus the substructure at the end of the 1 arc from the root must satisfy the clause for T, and so forth.

It should now be clearer why we consider our logic a logic for functional grammar. Consider the FUG description in Figure 3.

[Figure 3: Disjunctive specification in FUG -- an alternation of three descriptions: one with cat = S, pattern = (subj pred ...), subj = [cat NP], and pred = [cat VERB], where scomp is either none or satisfies cat = S with pattern = (... scomp); one with cat = NP; and one with cat = VERB.]

According to [5, page 149], this description specifies sentences, verbs, or noun phrases. Let us call such structures "entities", and give a partial translation of this description into our logic. Create the type variables ENT, S, VERB, and NP. Consider the recursive formula

ENT where
  ENT ::= S ∨ NP ∨ VERB
  S ::= subj : NP ∧ pred : VERB ∧ (subj < pred)
        ∧ ((scomp : none) ∨ (scomp : S ∧ (pred < scomp)))

Notice that the category names can be represented as type variables, and that the categories NP and VERB are free type variables. Given an assignment of a set of f-structures to these type variables, the type ENT will become well-specified.

A few other points need to be made concerning this example. First, our formula does not have any ancestor information in it, so the dominance relations implicit in Kay's patterns are not represented. Second, our word order conventions are not the same as Kay's. For example, in the pattern (subj pred ...), it is required that the subject be the very first constituent in the sentence, and that nothing intervene between the subject and predicate. To model this we would need to add the "immediately left of" predicate, because our < predicate is transitive, and does not require this property. Next, Kay uses "CAT" arcs to represent category information, and considers "NP" to be an atomic value. It would be possible to do this in our logic as well, and this would perhaps not allow NPs to be unified with VERBs. However, the type variables would still be needed, because they are essential for specifying recursion. Finally, FUG has other devices for special purposes. One is the use of nonlocal paths, which are used at inner levels of description to refer to features of the "root node" of a DG. Our logic will not treat these, because in combination with recursion, the description of the semantics is quite complicated. The full version of the paper will have the complete semantics.
The following list gives the syntactical constructions. All but the last four items are atomic formulas. 1. NIL 2. TOP 3. X, in which X E TVAR 4. a, in which a E A 5. o', in which o" E E 6. z<v, in which z and v E L" 7. x c~ V, in which z and V E L" 8. [zt ..... x~], in which each z~ E L= 9./:$ 10. @^g, 11. ~v,~ 12. ~b where [Xt ::= ~bt;... X,~ ::= ~,] Items (1) and (2) are the identically true and false formulas, respectively. Item (8) is the way we officially represent path equations. We could as well have used equations like z = V, where ~ and V E L', but our deft- nition lets us assert the simultaneous equality of a finite number of paths without writing out all the pairwise path equations. Finally, the last item (12) is the way to express recursion. It will be explained in the next subsection. Notice, however, that the keyword where is part of the syntax. 3.3.2 Semantics The semantics is given with a standard Tarski defini- tion based on the inductive structure of wffs. Formulae are satisfied by pairs (.4,p), where ,4 is an oriented f- structure and p is a mapping from type variables to sets off-structures, called an environment. This is needed be- cause free type variables can occur in formulas. Here are the official clauses in the semantics: NIL always; TOP never; x iff.4 e p(X); a iff 7"(q0) = a, where q0 is the initial state 1. (.4, p) 2. (.4,p) 3. (.4,p) 4. (.4, p) of ,4; 5. (A,p) 6. (.4, p) T. (.4,p) 8. (.4, p) ~, where o" E ~-, iff r(q0) = o'; v < w iff 6(q0, v) < 6(qo, w); v a w iff 6(qo, v) a ~(qo, w); [=~ ..... =.] iffVi,j : 6(q0,zl) = ~(qo,xj); 9. (.4,p) ~ l : ~ iff (.4/l,p) ~ ~, where .4/1 is the automaton .4 started at 6(qo, l); 10. (A, p) ~ ~ ^ ~ iff (A, p) ~ ~ and (A, p) ~ ~; 11. (.4,p) ~ ~ V ~b similarly; 12. (.4,p) ~ ~b where [Xt ::= Ot;...X, ::= 0n] iff for some k, (.4, p(~)) ~ ~b, where p(k) is defined inductively as follows: • p(°)(xo = 0; • p(k+~)(Xd = {B I (~,p(~)) [= ,~,}, and where p(k)(X) = p(X) if X # Xi for any i. We need to explain the semantics of recursion. Our semantics has two presentations. The above definition is shorter to state, hut it is not as intuitive as a syntactic, operational definition. In fact, our notation ~b where [Xt ::= ~bl ..... Xn ::- ~bn] 92 is meant to suggest that the Xs can be replaced by the Cs in ¢. Of course, the Cs may contain free occurrences of certain X variables, so we need to do this same replace- ment process in the system of Cs beforehand. It turns out that the replacement process is the same as the pro- cess of carrying out grammatical derivations, but making replacements of nonterminal symbols all at once. With this idea in mind, we can turn to the definition of replacement. Here is another advantage of our logic - replacement is nothing more than substitution of formu- las for type variables. Thus, if a formula 0 has distinct free type variables in the set D = {Xt ..... An}, and Ct,..-, ¢, are formulas, then the notation denotes the simultaneous replacement of any free occur- rences of the Xj in 0 with the formula Cj, taking care to avoid variable clashes in the usual way (ordinarily this will not be a problem.) Now consider the formula ¢ where [Xt ::= Ct;.-.X, ::= ¢,]. The semantics of this can be explained as follows. Let D = {XI ..... X,~}, and for each k _> 0 define a set of formulas {¢~k) [ I _< i _< n}. This is done inductively on k: ~o) = ¢,[X *-- TOP : X E D]; ¢(k+1) .- elk) i = ~'i[X : X e O]. These formulas, which can be calculated iteratively, cor- respond to the derivation process. 
Next, we consider the formula ¢. In most grammars, ¢ will just be a "distinguished" type variable, say S. If (`4, p) is a pair consisting of an automaton and an envi- ronment, then we define (`4, p) ~ ¢ where [Xt ::= ¢i;...X,t ::= ¢,] iff for some k, (.4, p) ~ ¢[X, ,- elk): X, E D]. Example. Consider the formula (derived from a reg- ular grammar) S where T "'~ (I :aA2 : S) V(I :hA2 :T) Vc (I :bA2 : S) V(I :aA2 : T) Vd. Then, using the above substitutions, and simplifying ac- cording to the laws of Kasper-Rounds, we have ¢(s o) C, ¢~) = d; CH) = (1:aA2:c) V(1:bA2:d)Vc; ¢(~) = (1:bA2:c) V(1:aA2:d)Vd; ¢(2) = I:aA2:(I:aA2:c) V(I:bA2:d)Vc) V l:bA2:((l:bA2:c) V(l:aA2:d)Vd) VC. The f-structures defined by the successive formulas for S correspond in a natural way to the derivation trees of the grammar underlying the example. Next, we need to relate the official semantics to the derivational semantics just explained. This is done with the help of the following lemmas. Lemma 1 (`4,p) ~ ¢~) ~ (`4, p(k)) ~ ¢i. Lemma 2 (`4,p) ~ 0[Xj -- ¢./ : X./ E D] iff(`4,p') O, where p°(Xi) = {B ] (B,p) ~ ¢i}, if Xi E D, and otherwise is p(X). The proofs are omitted. Finally, we must explain the notion of the language defined by ¢, where ¢ is a logical formula. Suppose for simplicity that $ has no free type variables. Then the notion A ~ 0 makes sense, and we say that a string w E L(~b) iff for some subsumpfion.minirnal f-structure ,4, A ~ ¢, and w is compatible with ,4. The notion of subsumption is explained in [8]. Briefly, we have the following definition. Let ,4 and B be two automata. We say ,4 _ B (.4 subsumes B; B extends `4) iff there is a homomorphisrn from `4 to B; that is, a map h : Q.4 -- Qs such that (for all existing transitions) 1. h(6.~(q, l)) = 6B(h(q), l); 2. r(h(q)) = r(q) for all q such that r(q) E A; 3. h(qoa) = qo~. It can be shown that subsurnption is a partial order on isomorphism classes of automata (without orderings), and that for any formula 4} without recursion or ordering, that there are a finite number of subsumption-minimal au- tomata satisfying it. We Consider as candidate structures for the language defined by a formula, only automata which are minimal in this sense. The reason we do this is to exclude f-structures which contain terminal symbols not mentioned in a formula. For example, the formula NIL is satisfied by any f-structure, but only the mini- mal one, the one-node automaton, should be the principal structure defined by this formula. By compatibility we mean the following. In an f- structure `4, restrict the ordering < to the terminal sym- bois of,4. This ordering need not be total; it may in fact be empty. If there is an extension of this partial order on the terminal nodes to a total order such that the labeling 93 symbols agree with the symbols labeling the positions of w, then w is compatible with A. This is our new way of dealing with free word order. Suppose that no precedence relations are specified in a formula. Then, minimal satisfying f-structures will have an empty < relation. This implies that any permutation of the terminal symbols in such a structure will be al- lowed. Many other ways of defining word order can also be expressed in this Logic, which enjoys an advantage over ID/LP rules in this respect. 4 Modeling Relational Grammar Consider the relational analyses in Figures 4 and 5. These analyses, taken from [7], have much in common with functional analyses and also with transsformational ones. 
The present pair of networks illustrates a kind of raising construction common in the relational literature. In Figure 4, there are arc labels P, I, and 2, representing "predicate", "subject", and "object" relations. The "cl" indicates that this analysis is at the first linguistic stra- tum, roughly like a transformational cycle. In Figure 5, we learn that at the second stratum, the predicate ("be- lieved") is the same as at stratum i, as is the subject. However, the object at level 2 is now "John", and the phrase "John killed the farmer" has become a "chSmeur" for level 2. The relational network is almost itself a feature struc- ture. To make it one, we employ the trick of introducing an arc labeled with l, standing for "previous level". The conditions relating the two levels can easily be stated as path equations, as in Figure 6. The dotted lines in Figure 6 indicate that the nodes they connect are actually identical. We can now indicate precisely other information which might be specified in a relational grammar, such as the ordering information I < P < 2. This would apply to the "top level", which for Perlmutter and Postal would be the "final level", or surface level. A recursive specification would also become possible: thus SENT ::= CLAUSEA(I<P<2) CLAUSE ::= I:NOMAP:VERB A 2 : (CLAUSE V NOM) A (RAISE V PASSIVE V ...) A I : CLAUSE l : 2 : CLAUSE A (equations in (6)) RAISE ::= This is obviously an incomplete grammar, but we think it possible to use this notation to give a complete specifi- cation of an RG and, perhaps at some stage, a computa- tional test. 5 Undecidability In this section we show that the problem of sa(is/ia- bility - given a formula, decide if there is an f-structure satisfying it - is undecidable. We do this by building a for- mula which describes the computations of a given Turing machine. In fact, we show how to speak about the com- putations of an automaton with one stack (a pushdown automaton.) This is done for convenience; although the halting problem for one-stack automata is decidable, it will be clear from the construction that the computation of a two-stack machine could be simulated as well. This model is equivalent to a Turing machine - one stack rep- resents the tape contents to the left of the TM head, and the other, the tape contents to the right. We need not simulate moves which read input, because we imagine the TM started with blank tape. The halting problem for such machines is still undecidable. We make the following conventions about our PDA. Moves are of two kinds: • qi : push b; go to qj ; • qi : pop stack; if a go to qj else go to qk. The machine has a two-character stack alphabet {a, b}. (In the push instruction, of course pushing "a" is allowed.) If the machine attempts to pop an empty stack, it can- not continue. There is one final state qf. The machine halts sucessfully in this and only this state. We reduce the halting problem for this machine to the satisfiability problem for our logic. Atoms: "none ..... bookkeeping marker for telling what is in the stack qO, ql ..... qn--- one for each state Labels: a, b --- for describing stack contents s -- pointer to top of stack next --- value of next state p --- pointer to previous stack configuration Type variables: CONF -- structure represents a machine configuration INIT0 FINAL --confi~trations at start and finish QO ..... QN: property of being in one of these states The simulation proceeds as in the relational grammar example. 
Each configuration of the stack corresponds to a level in an RG derivation. Initially, the stack is empty. Thus we put 94 Figure 4: Network for The woman believed that John killed the farmer. b ~ p c a. f Figure 5: Network for The woman believed John to have killed the farmer. p = lp 1 = ll 2 = 121 Chop = 12P Cho 2 " 1 2 2 Figure 6: Representing Figure 5 as an f-structure. 95 INIT ::= s : (b : none A a : none) A nerl; : q0. Then we describe standard configurations: C0//F ::= ISIT V (p : CONF A (QO V... V QN)). Next, we show how configurations are updated, de- pending on the move rules. If q£ is push b; go to qj, then we write QI ::=nex~:qjAp:next:qiAs:a:noneAsb=ps. The last clause tells us that the current stack contents, after finding a %" on top, is the same as the previous contents. The %: none" clause guarantees that only a %" is found on the DG representing the stack. The sec- ond clause enforces a consistent state transition from the previous configuration, and the first clause says what the next state should be. If q£ is pop stack; if a go to qj else go to qk, then we write the following. QI ::= p : nex~ : qi A ((s=psaAnex~::qjAp:s:b:none) V(s=psbAnext:qkAp:s:a:none)) For the last configuration, we put I~F ::---- C011F A p : nex~ : qf. We take QF as the "distinguished predicate" of our scheme. It should be clear that this formula, which is a big where-formula, is satisfiable if[" the machine reaches state qf. 6 Conclusion It would be desirable to use the notation provided by our logic to state substantive principles of particu- lax linguistic theories. Consider, for example, Kashket's parser for Warlpiri [4], which is based on GB theory. For languages like Warlpiri, we might be able to say that linear order is only explicitly represented at the mor- phemic level, and not at the phrase level. This would translate into a constraint on the kinds of logical for- mulas we could use to describe such languages: the < relation could only be used as a relation between nodes of the MORPHEME type. Given such a condition on formulas, it migh t then be possible to prove complexity results which were more positive than a general undecid- ability theorem. Similar remarks hold for theories like relational grammar, in which many such constraints have been studied. We hope that logical tools will provide a way to classify these empirically motivated conditions. References [1] Joshi, A. , K. Vijay-Shanker, and D. Weir, The Con- vergence of Mildly Context-Sensitive Grammar For- malisms. To appear in T. Wasow and P. Sells, ed. "The Processing of Linguistic Structure", MIT Press. [2] Kaplan, R. and J. Bresnan, LFG: a Formal Sys- tem for Grammatical Representation, in Bresnan, ed. The Mental Representation of Grammatical Re- lations, MIT Press, Cambridge, 1982, 173-281. [3] Kasper, R. and W. Rounds, A Logical Semantics for Feature Structures, Proceedings of e4th A CL Annual Meeting, June 1986. [4] Kashket, M. Parsing a free word order language: Warlpiri. Proc. 24th Ann. Meeting of ACL, 1986, 60-66. [5] Kay, M. Functional Grammar. In Proceedings of the Fifth Annual Meeting of the Berkeley Linguistics So- ciety, Berkeley Linguistics Society, Berkeley, Califor- nia, February 17-19, 1979. [6] Pereira, F.C.N., and D. Warren, Definite Clause Gram- mars for Language Analysis: A Survey of the Formal- ism and a Comparison with Augmented Transition Networks, Artificial Intelligence 13, (1980), 231-278. [7] Perlmutter, D. M., Relational Grammar, in Syntax and Semantics, voi. 
18: Current Approaches to Syn- taz, Academic Press, 1980. [8] Rounds, W. C. and R. Kasper. A Complete Logi- cal Calculus for Record Structures Representing Lin- guistic Information. IEEE Symposium on Logic in Computer Science, June, 1986. [9] Rounds, W., LFP: A Formalism for Linguistic De- scriptions and an Analysis of its Complexity, Com- putational Linguistics, to appear. 96
1987
13
Functional Unification Grammar Revisited Kathleen R. McKeown and Cecile L. Paris Department of Computer Science 450 Computer Science Columbia University New York, N.Y. 10027 [email protected] [email protected] Abstract In this paper, we show that one benefit of FUG, the ability to state global conslralnts on choice separately from syntactic rules, is difficult in generation systems based on augmented context free grammars (e.g., Def'mite Clause Cn'anmm~). They require that such constraints be expressed locally as part of syntactic rules and therefore, duplicated in the grammar. Finally, we discuss a reimplementation of lUg that achieves the similar levels of efficiency as Rubinoff's adaptation of MUMBLE, a detcrministc language generator. 1 Introduction Inefficiency of functional unification grammar (FUG, [5]) has prompted some effort to show that the same benefits offered by FUG can be achieved in other formalisms more efficiently [3; 14; 15; 16]. In this paper, we show that one benefit of FUG, the ability to conciselyl state global constraints on choice in generation, is difficult in other formalhms in which we have written generation systems. In particular, we show that a global constraint can be stated separately from syntactic rules in FUG, while in generation systems based on augmented context free ~g~nunars (e.g., Definite Clause Cn'amma~ (DCG, [13])) such consWaints must be expressed locally as part of syntactic rules and the~=for¢, duplicated in the grammar. Finally, we discuss a reimplementation of lUG in TAILOR [11; 12] that achieves the si.m/l~r leveLs of efficiency as Rubinoff's adaptation [16] of MUMBLE [7], a deterministc language generator. 1.1 Statement of Constraints Language generation can be viewed primarily as a problem of choice, requiring decisions about which syntactic structures best express intent. As a result, much research in language generanon has focused on identi~ing conswaints on choice, and it is important to be able to represent these constraints clearly and efficiently. In this paper, we compare the representation of constraints in FUG with their repn:sentation in a DCG generation system [3]. We are interested in representing functional constraints on syntactic sWacture where syntax does not fully restrict expression; that is, conswaints other than those coming from syntax. We look at the representation of two specific constraints on syntactic choice: focus of attention on the choice of sentence voice and focus of attention on the choice of simple versus complex sentences. We claim that, in a lUG, these constraints can be stated separately from rules dictating syntactic structure, thus leading to simplicity of the granunar since the constraints only need to be stated once. This is possible in FUG because of unification and the ability to build constituent structure in the grammar. In contrast, in a DCG, constraints must be stated as part of the individual grammar rules, resulting in duplication of a constraint for each syntactic rule to which it applies. 1.2 Passive/Active Constraint Focus of attention can determine whether the passive or active voice should be used in a sentence [8]. The constraint dictates that focused information should appear as surface subject in the sentence. In FUG, this can be represented by one pattern indicating that focus should occur f'u'st in the sentence as shown in Figu~ 1. This panern would occur in the sentence category of the grammar, since focus is a sentence constituent. 
This constraint is represented as part of an alternative so that other syntactic constraints can override it (e.g., if the goal were in focus but the verb could not be pmsivized, ~ constraint would not apply and an active sentence would be generated). The structure of active or passive would be indicated in the verb group as shown in Figure 2.1 The correct choice of active or passive is made through unification of the patterns: active voice is selected if the focus is on the protagonist (focus unifies with pro:) and passive if focus is on the goal or beneficiary Orocus unifies with goal or beheld. This representation has two desirable properties: the constraint can be stated simply and the construction of the resulting choice b expr=ssed separately from the constraint. (alt ( (pattern (focus ...) ) ) ) Figure 1: Constraint on Passive/Active in FUG In the DCG, the unification of argument variables means a single rule can state that focus should occur first in the sentence. However, the rules specifying construction of the passive and active verb phrases must now depend on which role (protagonist, goal, or beneficiary) is in focus. This requires three separate rules, one of which will be chosen depending on which of the three other case roles is the same as the value for focus. The DCG v..presentation thus mixes information from the conswaint, focus of attention, with the passive/active construction, duplicating it over three tThis figure shows only the m'dm, of comtitmmu foe active and passive voice m~l does no¢ include odwr details of the co~au'ucdon. 97 (alt ((voice active) (pattern (prot verb goal))) ((voice passive} (alt ((pattern (goal verb1 verb2 by-pp))) ((pattern (benef verbl verb2 by-pp)}})}) Figure 2: Passive/Active Construction in FUG rules. The sentence rule is shown in Figure 3 and the three other rules are presented in Figure 4. The constituents of the proposition are represented as variables of a clause. In Figure 4, the arguments, in order, are verb (V), protagonist (PR), goal (G), beneficiary (B), and focus. The arguments with the same variable name must be equal. Hence, in the Figure, focus of the clause must be equal to the protagonist (PR). sentence (clause (Verb, Prot, Goal, Benef, Focus} ) ~> nplist (Focus}, verb_phrase (Verb, Prot, Goal, Benef, Focus) . Figure 3: Passive/Active Constraint in DCG 1.3 Focus Shift Constraint This constraint, identified and formalized by Derr and McKeown [3], constrains simple and complex sentence generation. Any generation system that generates texts and not just sentences must determine when to generate a sequence of simple sentences and when to combine simple sentences to form a more complex sentence. Derr and McKcown noted that when a speaker wants to focus on a single concept over a sequence of sentences, additional information may need to be presented about some other concept. In such a case, the speaker will make a temporary digression to the other concept, but will immediately continue to focus on the first. To signal that focus does not shift, the speaker can use subordinate sentence structure when presenting additional information. The focus constraint can be stated formally as follows: assume input of three propositions, PI, P2, and P3 with /* V = Verb; PR = Prot; G ~ Goal; B = Beneficiary; last argument - focus */ • verb_phrase (pred (V, NEG, T, AUX}, PR, G, B, PR) -->verb (V, NEG, T, AUX, N, active), nplist (G), pp (to, B). 
verb_phrase (pred (V, NEG, T, AUX), PR, G, B, G) -->verb (V, NEG, T, AUX, N, passive), pp (to, B), pp (by, PR). verbphrase (pred (V, NEG, T, AUX), PR, G, B, B) -->verb (V, NEG, T, AUX, N, passive), nplist (G), pp (by, PR). Figure 4: Passive/Active Construction in DCG arguments indicating focus F1, F2, and F3. 2 The constraint states that if F1 = F3, Fl does not equal F2 and F2 is a constituent of PI, the generator should produce a complex sentence consisting of PI, as main sentence with P2 subordinated to it through P2's focus, followed by a second sentence consisting of P3. In FUG, this constraint can be stated in three parts, separately from other syntactic rules that will apply: I. Test that focus remains the same from PI to P3. 2. Test that focus changes from PI to P2 and that the focus of I'2 is some constituent of PI. 3. If focus does shift, form a new constituent, a complex sentence formed from PI and P2, and order it to occur before P3 in the output (order is specified by patterns in FUG). Figure 5 presents the constraint, while Figure 6 shows the construction of the complex sentence from P1 and P2. Unification and paths simplify the representation of the constraint. Paths, indicated by angle brackets (<>), allow the grammar to point to the value of other constituents. Paths and unification are used in conjunction in Part 1 of Figure 5 to state that the value of focus of P1 should unify with the 2In the systems we are describing, input is specified in a case frame formalism, with each pmpositioa indicating protagonist (prot), goal, beneficiary (benef), verb, and focus. In these systems, iexical choice is made before entering the grammar, thus each of these arguments includes the word to be used in the sentence. 98 (alt % Is focus the same in P1 and P3? 1.((PI ((focus <^ P3 focus>))) % Does not apply if focus % stays the same 2. (alt (((PI ((focus <^ P2 focus>)))) ( % Focus shifts; Check that P2 % focus is a constituent of % PI. (alt (((PI ((prot <^ P2 focus>)))) ((PI ((goal <a P2 focus>)))) ((P1 ((benef <^ P2 focus>)))))) % Form new constituent from P1 % and P2 and order before P3. 3. (pattern (PIP2subord P3) ) (P3 (cat s) ) % New constituent is of category % subordinate. (PIPRsubord % Place P2 focus into % subordinate as it will % be head of relative clause. (same <^ P2 focus>) (cat subordinate) ) ) ) ) ) Figure 5: Focus Shift Constraifit in FUG value of focus of P3 (i.e., these two values should be equal). 3 Unification also allows for structure to be built in the grammar and added to the input. In Part 3, a new constituent P1P2subord is built. The full structure will result from unifying P1P2aubord with the category subordinate, in which the syntactic structure is represented. The grammar for this category is shown in Figure 6. It constructs a relative clause 4 from P2 and attaches it to the constituent in P1 to which focus shifts in 1:'2. Figure 7 shows the form of input requixed for this constraint and the output that would be produced. 3A path is used to expect the focus of P3. An atuibute value pair such as (focus <P3 focus>) determines the value for focus by searching for an amibute P3 in the list of am'ibutes (or Functional Description if'D)) in whichfocus occurs. The value of P3'sfocua is then copied in as the value of focus. In order to refer to attributes at any level in the m~e formed by the nestsd set of FDs, the formalism includes an up-arrow (^). 
For example, given the attribum value pair (attrl <^ am'2 attt3>), the up- arrow indica,,'s that the system should look for attr2 in the FD containing the FD ofattrl. Since P3 occurs in the FD containing PI, an up-arrow is used to specify that the system should look for the attribute P3 in the FD containing PI (i.e., one level up). More up-arrows can be used if the fast attribute in the path occurs in an even higher level FD. 4The entire grammar for relative clauses is not shown. In particular, it would have to add a relative pronoun to the input. ( (cat subordinate) % Will consist of one compound sentence (pattern (s)) (s ((cat s))) % Place contents of P1 in s. (s <^^ PI>) % Add the subordinate as a % relative clause modifying SAME. ( s ^me % Place the new subordinate made from % P2 after head. ((pattern (... head newsubord ...)) % Form new subordinate clause (newsubord % It's a relative clause. (cat s-bar) (head <^ head>) % All other constituents in % newsubord come from P2. (same ( (newsubord <^ ^ P2>) % Unify same with appropriate % constituent of P1 to attach % relative clause (s ((alt (((prot <^ same>)) ( (goal <^ same>)) ( (banef <^ same>) ) ) ) ) ) ) Figure 6: Forming the Subordinate Clause in FUG In the DCG formalism, the constraint is divided between a rule and a test on the rule. The rule dictates focus remain the same from P1 to P3 and that P2's focus be a constituent of P1, while the test states that P2's focus must not equal Pl's. Second, because the DCG is essentially a context free formalism, a duplication of rules for three different cases of the construction is required, depending on whether focus in P2 shifts to protagonist, goal or beneficiary of PI. Figure g shows the three rules needed. Each rule takes as input three clauses (the first three clauses listed) and produces as output a clause (the last listed) that combines P1 and P2. The test for the equality of loci in Pl and P3 is done through PROLOG unification of variables. As in the previous DCG example, arguments with the same variable name must be equal. Hence, in the first rule, focus of the third clause (FI) must be equal to focus of the first clause (also FI). The shift in focus from P1 to P2 is specified as a condition (in curly brackets {}). The condition in the first rule of Figure 8 states that the focus of the second clause (PR l) must not be the same as the focus of the fast clause if:l). Note that the rules shown in Figure 8 represent primarily the constraint (i.e., the equivalent of Figure 5). 99 INPUT: ( (Pl ( (prot ((head girl))) (goal ((head cat))) (verb-group ((verb ..... pet))) (focus <prot>)))) (P2 (prot ((head =ms cat)) (goal ((head ~ mouse)) (verb-group ((verb .ms caught))) (focus <prot>)))) (P3 ((prot ((head ~- girl))) (goal ((head ~m happy))) (verb-group ((verb ~ be))) (focus <prot>))))) OUTPUT - The girl pet the cat that caught the mouse. The girl was happy. Figure 7: Input and Output for FUG The building of structure, dictating how to construct the relative clause from P2 is not shown, although these rules do show where to attach the relative clause. Second, note that the conswaint must be duplicated for each case where focus can shift (i.e., whether it shifts to pint, goal or beneficiary). 1.4 Comparisons With Other Generation System Grammars The DCG's duplication of rules and constraints in the examples given above results because of the mechanisms provided in DCG for representing conswaints. 
Constraints on consdtuent ordering and structure are usually expressed in the context free portion of the granmmr;, that is, in the left and fight hand sides of rules. Constraints on when the context free rules should apply are usually expressed as tests on the rules. For generation, such constraints include pragmatic constraints on free syntactic choice as well as any context sensitive constraints. When pragmatic constraints apply to more than one ordering constraint on constituents, this necessarily means that the constraints must be duplicated over the rules to which they apply. Since DCG allows for some constraints to be represented through the unification of variables, this can reduce the amount of duplication somewhat. FUG allows pragmatic constraints to be represented as meta-rules which are applied to syntactic rules expressing ordering constraints through the process of unification. This is similar to Chomsky's [2] use of movement and focus rules to transform the output of context free rules in order to avoid rule duplication. It may be possible to factor out constraints and represent them as recta-rules in a DCG, but this would involve a non-standard implementation of the DCG (for example, compilation of the DCG to another grammar formalism which is capable of representing constraints as meta-rules). /* Focus of P2 is protagonist of PI (PR1) Example: the cat was petted by the girl that brought it. the cat purred */ foc_shift (clause (VI, PR1, GI, B1, FI), clause (V2, PR2, G2, B2, PRI) , clause (V3, PR3, G3, B3, F1), clause (Vl, [np (PRI, clause (V2, PR2, G2, B2, PRI) ) ], GI, BI, FI) ) /* Test: focus shifts from P1 to P2 */ (~I \-~ FI} /* Focus of P2 is goal of P1 (GI) Example: the girl pet the cat that caught the mouse, the girl was happy */ foc shift (clause (Vl, PRI, GI, BI, FI), I clause (V2, PR2, G2, B2, GI), clause (V3, PR3, G3, B3, FI) , clause (Vl, PRI, [np (GI, clause (V2, PR2, G2, B2, GI) ) ], ~i,Fl) ) /* Test: focus shifts from P1 to P2 */ {GI \~m FI} /* Focus of P2 is Beneficiary of P1 (BI) Example: the mouse was given to the cat that was hungry, the mouse was not happy */ foc shift (clause (Vl, PRI, G1, B1, FI), ~ause (V2, PR2, G2, B2, BI) , clause (V3, PR3, G3, B3, FI), clause (VI, PRI, GI, [np (B1, clause (V2, PR2, G2, B2, BI) ) ], rl) ) /* Test: focus shifts from P1 to P2 */ (~I V-= rl} Figure 8: Focus Shift Constraint in DCG Other grammar formalisms that express constraints through tests on rules also have the same problem with rule duplication, sometimes even more severely. The use of a simple augmented context free grammar for generation, as implemented for example in a bottom-up parser or an augmented transition network, will require even more duplication of constraints because it is lacking the unification of variables that the DCG includes. For example, in a bottom-up generator implemented for word algebra problem generation by Ment [10], constraints on wording of the problem are expressed as tests on context free rules and natural language output is generated through actions on the rules. Since Ment controls the linguistic difficulty of the generated word algebra problem as well as the algebraic difficulty, his constraints determine when to generate 100 particular syntactic constructions that increase wording difficulty. In the bottom-up generator, one such instructional consuaint must be duplicated over six different syntactic rules, while in FUG it could be expressed as a single constraint. 
Ment's work points to interesting ways instructional constraints interact as well, further complicating the problem of clearly representing constraints. In systemic grammars, such as NIGEL [6], each choice point in the grmm'nar is represented as a system. The choice made by a single system often determines how choice is made by other systems, and this causes an interdependence among the systems. The grammar of English thus forms a hierarchy of systems where each branch point is a choice. For example, in the part of the grammar devoted to clauses, one of the Rrst branch points in the grammar would determine the voice of the sentence to be generated. Depending on the choice for sentcmce voice, other choices for ovcrali sentence structure would be made. Constraints on choice arc expressed as LISP functions called choosers at each branch point in the grammar. Typically a different chooser is written for each system of the grammar. Choosers invoke functions called inquiry operators to make tests determining choice. Inquiry operators are the primitive functions representing constraints and are not duplicated in the grammar. Calls to inquiry operators from different choosers, however, may be duplicated. Since choosers are associated with individual syntactic choices, duplications of calls is in some ways similar to duplication in augmented context free grammars. On the other hand, since choice is given an explicit representation and is captured in a single type of rule called a system, representation of constraints is made clearer. This is in contrast to a DCG where constraints can be distributed over the grammar, sometimes represented in tests on rules and sometimes represented in the rule itself. The systcmic's grammar use of features and functional categories as opposed to purely syntactic categories is another way in which it, like FUG, avoids duplication of rules. It is unclear from published reports how constraints are represented in MUMBLE [7]. Rubinoff[16] states that constraints are local in MUMBLE, and thus we suspect that they would have to be duplicated, but this can only be verified by inspection of the actual grammar. 2 Improved Efficiency Our implementation of FUG is a reworked version of the tactical component for TEXT [9] and is implemented in PSL on an IBM 4381 as the tactical component for the TAILOR system [11; 12]. TAILOR's FOG took 2 minutes and 10 seconds of real time to process the 57 sentences from the appendix of TEXT examples in [9] (or 117 seconds of CPU time). This is an average of 2.3 seconds real time per sentence, while TEXT's FUG took, in some cases, 5 minutes per sentence. 5 This compares quite favorably with Rubinoff's adaptation [16] of MUMBLE[7] for TEXT's strategic component. Rubinoff's MUMBLE could process all 57 sentences in the appendix of TEXT examples in 5 minutes, yielding an average of 5 seconds per sentence. SWe use real times for our comparisons in ordea to make an analogy with Rubinoff [16], who also used real times. Thus our new implementation results in yet a better speed-up (130 times faster) than Rubinoff's claimed 60 fold speed-up of the TEXT tactical component. Note, however, that Rubinoff's comparison is not at all a fair one. First, Rubinoff's comparisons were done in real times which are dependent on machine loads for time- sharing machines such as the VAX-780, while Symbolics real time is essentially the same as CPU time since it is a single user workstation. Average CPU time per sentence in TEXT is 125 seconds. 
6 This makes Rubinoff's system only 25 times faster than TEXT. Second, his system runs on a Symbolics 3600 in Zctalisp, while the original TEXT tactical component ran in Franzlisp on a VAX 780. Using Gabriel's benchmarks [4] for Boyer's theorem proving unification based program, which ran at 166.30 seconds in Franzlisp on a Vax 780 and at 14.92 seconds in Symbolics 3600 Commonl.isp, we see that switching machines alone yields a 11 fold speed-up. This means Rubinoff's system is actually only 2.3 times faslcr than TEXT. Of course, this means our computation of a 130 fold speed-up in the new implementation is also exaggerated since it was computed using real time on a faster machine too. Gabriel's benchmarks arc not available for PSL on the IBM 4381, 7 but we are able to make a fair comparison of the two implementations since we have both the old and new versions of FUG running in PSL on the IBM. Using CPU times, the new version proves to be 3.5 times faster than the old tactical component, e Regardless of the actual amount of spc~-up achieved, our new version of FUG is able to achieve similar speeds to MUMBLE on the same input, despite the fact that FUG uses a non-deterministic algorithm and MUMBLE uses a deterministic approach. Second, regardless of comparisons between systems, an average of 2.3 seconds real time per sentence is quite acceptable for a practical generation system. We were able to achieve the speed-up in our new version of FUG by making relatively simple changes in the unification algorithm. The fast change involved immediately selecting the correct category for unification from the grammar whenever possible. Since the grammar is represented as a llst of possible syntactic categories, the first stage in unification involves selecting the correct category to unify with the input. On fast invoking the unifier, this means selecting the sentence level category and on unifying each constituent of the input with the grammar, this means selecting the category of the constituem. In the old grammar, each category was unified successively until the correct one was found. In the current implementation, we retrieve the correct category immediately and begin ¢'rhis was computed using TEXT's appendix where CPU time is given in units corresponding to 1/60 second. "/Gabriel's benchmarks are available only for much larger IBM, mainfranzs. SThe new version took 117 CPU seconds to process all sentences, or 2 CPU seconds per sentence, while the old version took 410 CPU seconds to process all sentences, or 7 CPU seconds per sentence. 101 unification directly with the correct category. Although unification would fail immediately in the old version, directly retrieving the category saves a number of recursive calls. Unification with the lexicon uses the same technique in the new version. The correct lexicai item is directly retrieved from the grammar for unification, rather than unifying with each entry, in the lexicon successively. Another change involved the generation of only one sentence for a given input. Although the grammar is often capable of generating more than one possible sentence for its input 9, in practice, only one output sentence is desired. In the old version of the unifier, all possible output sentences were generated and one was selected. In the new version, only one successful sentence is actually generated. Finally, other minor changes were made to avoid recursive calls that would result in failure. 
Our point in enumerating these changes is to show that they arc extremely simple. Considerably more speed-up is likely possible if further implementation were done. In fact, we recently received from ISI a version of the FUG unifier which was completely rewritten from our original code by Jay Myers. It generates about 6 sentences per seconds on the average in Symbolics Commonlisp. Both of these implementations demonstrate that unification for FUG can be done efficiently. 3 Conclusions We have shown how constraints on generation can be represented separately from representation of syntactic structure in FUG. Such an ability is attractive because it means that the constraint can be stated once in the grammar and can be applied to a number of different syntactic rules. In contrast, m augmented context free based generation systems, constraints must be stated locally as part of individual syntactic rules to which they apply. As a result' constraints must be duplicated. Since a main focus in language generation research has been to identify constraints on choice, the ability to represent constraints clearly and efficiently is an important one. Representing constraints separately is only useful for global constraints, of course. Some constraints in language generation are necessarily local and must be represented in FUG as they would in augmented context free based systems: as part of the syntactic structures to which they apply. Furthermore, information for some constraints may be more easily represented outside of the grammar. In such cases, using a function caLl to other components of the system, as is done in NIGEL, is more appropriate. In fact, this ability was implemented as part of a FUG in TELEGRAM [I]. But for global constraints for which information is available in the grammar, FUG has an advantage over other systems. Our reimplementation of FUG has demonstrated that efficiency is not as problematic as was previously believed. Our version of FUG, running in PSL on an IBM 4381, runs 9Often the surface sentences gen~ated are the same, but the syntactic structure built in producing the sentence differs. faster than Rubinoff's version of MUMBLE in Symbolics 3600 Zetalisp for the same set of input sentences. Furthermore, we have shown that we were able to achieve a slightly better speed-up over TEXT's old tactical component than Rubinoff's MUMBLE using a comparison that takes into account different machines. Given that FUG can produce sentences in time comparable to a deterministic generator, efficiency should no longer be an issue when evaluating FUG as a generation system. Acknowledgements The research reported in this paper was partially supported by DARPA grant N00039-84-C-0165, by ONR grant N00014-82-K-0256 and by NSF grant IST-84-51438. We would like to thank Bill Mann for making a portion of NIGEL's grammar available to us for comparisons. References [1] Appelt' D. E. T~T .~GRAM: A Gra.tm'nar Formalism for Language Planning. In Proceedings of the Eigth National Conference on Artificial Intelligence, pages 595 - 9. Karlsruhe, West Germany, August, 1983. [2] Chomsky, N. Essays on Form and Interpretation. North-Holland Publishing Co., Amsterdam, The Netherlands, 1977. [3] Deft, M.A. and McKeown, K. R. Using Focus to Generate Complex and Simple Sentences. In Proceedings of the ]Oth International Conference on Computational Linguistics, pages 501-4. Stanford, Ca., July, 1984. [4] Gabriel, R. P. Performance and Evaluation of Lisp Systems. MIT Press, Cambridge, Mass., 1985. Kay, Martin. 
Functional Grammar. In Proceedings of the 5th meeting of the Berkeley Linguistics Society. Berkeley Linguistics Society, 1979. [6] Mann, W.C. and Matthiessen, C. NIGEL: A Systemic Grammar for Text Generation. Technical Report ISI/RR-85-105, Information Sciences Institute, February, 1983. 4676 Admiralty Way, Marina del Rey, California 90292-6695. [7] McDonald, D. D. Natural Language Production as a Process of Decision Making under Constraint. PhD thesis, MIT, Cambridge, Mass, 1980. McKeown, K. R. Focus Constraints on Language Generation. In Proceedings of the Eight International Conference on Artificial Intelligence. Karlsruhe, Germany, August, 1983. ,. [51. [8] 102 [9] McKeown, K.R. Text Generation: Using Discourse Strategies and Focus Constraints to Generate Natural Language Text. Cambridge University Press, Cambridge, England, 1985. [10] Ment~ J. From Equations to Words. Language Generation and Constraints in the Instruction of Algebra Word Problems. Technical Report, Computer Science Depamnent, Columbia University, New York, New York, 10027, 1987. [11] Paris, C. L. Description Strategies for Naive and Expert Users. In Proceedings of the 23rd Annual Meeting of the Association for Computational Linguistics. Chicago, 1985. [12] Paris, C. L. Tailoring Object Descriptions to the User's Level of Expertise. Paper presented at the International Workshop on User Modelling, Maria Laach, West Germany. August, 1986 [13] Pereira, F.C.N. and Warren, D.H.D. Definite Clause Grammars for Language Analysis - A Survey of the Formalism and a Comparison with Augmented Transition Network. Artificial Intelligence :231- 278, 1980. [14] Ritchie, G. The Computational Complexity of Sentence Derivation in Functional Unification Grammar. In Proceedings of COLING '86. Association for Computational Linguistics, Bonn, West Germany, August, 1986. [15] Ritchie, G. Personal Communication. [ 16] Rubinoff, R. Adapting MUMBLE: Experience with Natural Language Generation. In Proceedings of the Fifth Annual Conference on Artificial Intelligence. American Association of Artificial Intelligence, 1986. 103
1987
14
CHARACTERIZING STRUCTURAL DESCRIPTIONS PRODUCED BY VARIOUS. GRAMMATICAL FORMALISMS* K. Vijay-Shanker David J. Weir Aravind K. Joshi Deparunent of Computer and Information Science University of Pennsylvania Philadelphia, Pa 19104 ABSTRACT We consider the structural descriptions produced by vari- ous grammatical formalisms in ~ of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate. In considering the relationship between formalisms, we show that it is useful to abstract away from the details of the formalism, and examine the nature of their derivation process as reflected by properties of their deriva:ion trees. We find that several of the formalisms considered can be seen as being closely related since they have derivation tree sets with the same structure as those produced by Context-Free C-ramma~. On the basis of this observation, we describe a class of formalisms which we call Linear Context- Free Rewriting Systems, and show they are recognizable in poly- nomial time and generate only semilinear languages. 1 Introduction Much of the study of grammatical systems in computational linguistics has been focused on the weak generative capacity of grammatical forma~sm- Little attention, however, has been paid to the structural descriptions that these formalisms can assign to strings, i.e. their strong generative capacity. This aspect of the formalism is beth linguistically and computationally important. For example, Gazdar (1985) discusses the applicability of In- dexed Grammars (IG's) to Natural Language in terms of the structural descriptions assigned; and Berwick (1984) discusses the strong generative capacity of Lexical-Functional Grammar CLFG) and Government and Bindings grammars (GB). The work of Thatcher (1973) and Rounds (1969) define formal systems that generate tree sets that are related to CFG's and IG's. We consider properties of the tree sets generated by CFG's, Tree Adjoining Grammars (TAG's), Head GrammarS (HG's), Categorial Grammars (CG's), and IG's. We examine both the complexity of the paths of trees in the tree sets, and the kinds of dependencies that the formalisms can impose between paths. These two properties of the tree sets are not only linguistically relevant, but also have computational importance. By consider- ing derivation trees, and thus abstracting away from the details of the composition operation and the structures being manipuht_ed, we are able to state the similarities and differences between the "This work was partially supported by NSF grants MCS-82-19116-CER, MC$- $2-07294 and DCR-84-10413, ARO grant DAA 29-84-9-0027, and DARPA grant N00014-85-K001& We are very gateful to Tony Kroch, Michael Palis, Sunii Shende, and Mark $teedman for valuable discussions. formalisms. It is striking that from this point of view many for- malisms can be grouped together as having identically s~'uctm'ed derivation tree sets. This suggests that by generalizing the notion of context-freeness in CFG's, we can define a class of grarnmati- ca] formalisms that manipulate more complex structures. In this paper, we outline how such family of formalisms can be defined, and show that like CFG's, each member possesses a number of desirable linguistic and computational properties: in particular, the constant growth property and polynomial recognizability. 
2 Tree Sets of Various Formalisms 2.1 Context-Free Grammars From Thateheds (1973) work, it is obvious that the complexity of the set of paths from root to frontier of trees in a local set (the tree set of a CFG) is regular ~ . We define the path set of a tree 7 as the set of strings that label a path from the root to frontier of 7. The path set of a tree set is the union of the path sets of trees in that tree set. It can be easily shown from Thateher's result that the path set of every local set is a regular set. As a result, CFG's can not provide the structural descriptions in which there are nested dependencies between symbols labelling a path. For example, CFG's cannot produce trees of the form shown in Fig- ure I in which there are nested dependencies between S and NP nodes appearing on the spine of the tree. Gazdar (1985) argues this is the appropriate analysis of unbounded dependencies in the hypothetical Scandinavian language Norwedish. He also argues that paired English complementizers may also require structural descriptions whose path sets have nested dependencies. 2.2 Head Grammars and Generalized CFG's Head Grammars (HG's), introduced by Pollard (1984), is a for- realism that manipulates headed strings: i.e., strings, one of whose symbols is distinguished as the head. Not only is con- catenation of these s~ings possible, but head wrapping can be used to split a string and wrap it around another string. The productions of HG's are very similar to those of CFG's except that the operation used must be made explicit. Thus, the tree sets generated by HG's are similar to those of CFG's, with each node annotated by the operation (concatenation or wrapping) used to combine the headed s~ngs derived by the daughters of IThatcher actually chxacter/zed recognizable set~ for the purposes of this paper we do not distinguish them from local gels. 104 S* A s A NP vP A V S' A Mp s v PP I Figure 1: Nested dependencies in Norwedish that node. A derivation tree giving an analysis of Dutch subor- dinate clauses is given in Figure 2. NP VPRR/ N V S N V $ N V S /.~ I I N V /N I Figure 2: HG analysis of Dutch subordinate clauses HG's are a special case of a class of formalisms called Generalized Context-Free Grammars, also introduced by Pol- lard (1984). A formalism in this class is defined by a finite set of operations (of which concatenation and wrapping are two possibilities). As in the case of HG's the annotated tree sets for these formalisms have the same structure as local sets. 2.3 Tree Adjoining Grammars Tree Adjoining Grzrnmars, a tree rewriting formalism, was intro- duced by Joshi, Levy and Takabashi (1975) and Joshi (1983/85). A TAG consists of a finite set of elementary trees that are ei- ther initial trees or auxg/ary trees. Trees are composed using an operation called adjoining, which is defined as follows. Let be some node labeled X in a tree 3' (see Figure 3). Let 3" be a tree with root and foot labeled by X. When -/' is adjoined at r/ in the tree 3' we obtain a tree 3"". The subtree under ~1 is excised from 3', the tree 3" is inserted in its place and the excised subtree is inserted below the foot of 3". It can be shown that the path set of the tree set generated by a TAG G is a context-free language. TAG's can be used to give Y: S i'. s r'." 
x /?,,, Figure 3: Adjunction operation the structural descriptions discussed by Gazdar (1985) for the unbounded nested dependencies in Norwedish, for cross serial dependencies in Dutch subordinate clauses, and for the nestings of paired English complementizers. From the definition of TAG's, it follows that the choice of adjunodon is not dependent on the history of the derivation. Like CFG's, the choice is predetermined by a finite number of rules encapsulated in the grammar. Thus, the derivation trees for TAG's have the same structure as local sets. As with HG's derivation structures are annotated; in the case of TAG's, by the trees used for adjunction and addresses of nodes of the elemen- tary tree where adjuoctions occurred. We can define derivation trees inductively on the length of the derivation of a tree 3'. If 3' is an elementary tree, the deriva- tion tree consists of a single node labeled 3'. Suppose 3' results from the adjunction of 3"1,..., 3"k at the k distinct tree addresses nl,..., nk in some elementary tree 3", respectively. The tree denoting this derivation of 3' is rooted with a node labeled 7' having k sublrees for the derivations of 3"z,..., 3'k. The edge from the root to the subtree for the derivation of 3'~ is labeled by the address n~. To show that the derivation tree set of a TAG is a local set, nodes are labeled by pairs consisting of the name of an elementary tree and the address at which it was ad- joined, instead of labelling edges with addresses. The following rule corresponds to the above derivation, where 3'1,..., 3"k are derived from the auxiliary trees ~1 ..... ~k, respectively. (3", n) -- hi)... for all addresses n in some elementary tree at which 7 ~ can be adjoined. If 3" is an initial tree we do not include an address on the left-hand side. 2.4 Indexed Grammars There has been recent interest in the application of Indexed Grammars (IG's) to natural languages. Gazdar (1985) considers a number of linguistic analyses which IG's (but not CFG's) can make, for example, the Norwedish example shown in Figure i. The work of Rounds (1969) shows that the path sets of trees de- rived by IG's (like those of TAG's) are context-free languages. Trees derived by IG's exhibit a property that is not exhibited by the trees sets derived by TAG's or CFG's. Informally, two or more paths can be dependent on each other:, for example, they could be required to be of equal length as in the trees in Figure 4. 105 IG's can generate trees with dependent paths as in Figure 4b. Although the path set for trees in Figure 4a is regular, no CFG $ a a ~ b ~ a /A /B a • b • /A /B . . a a b (a) (b) Figure 4: Example with dependent paths generates such a tree set. We focus on this difference between the U'ee sets of CFG's and IG's, and formaliTe the notion of dependence between paths in a tree set in Section 3. An IG can he viewed as a CFG in which each nonterminal • is associated with a stack. Each production can push or pop symbols on the stack as can he seen in the following productions that generate tree of the form shown in Figure 4b. -. s(n,,) push - share - ,,A(o,) pop B(~a) -- bB(a) pop AO -- BO - b Gazdar (1985) argues that sharing of stacks can be used to give analyses for coordination. Analogous to the sharing of stacks in IG's, Lexical-Functional Grammar's (LFG's) use the unifi- cation of unbounded hierarchical structures. Unification is used in LFG's to produce structures having two dependent spines of unbounded length as in Figure 5. 
Bresnan, Kaplan, Peters, and Zaenen (1982) argue that these structures are needed to de- scribe erossed-serial dependencies in Dutch subordinate clauses. Gaadar (1985) considers a restriction of lG's in which no more s NF VP Jan NP VP V* I Plet NP VP V V* I I I Mms NP ~ V V' { I V { Figure 5: LFG analysis of Dutch subordinate clauses than one nonterminal on the right-hand-side of a production can inherit the stack from the left-hand-side. Unbounded dependen- cies between branches are not possible in such a system. TAG's can be shown to be equivalent to this restricted system. Thus, TAG's can not give analyses in which dependencies between arbitrarily large branches exist. 2.5 Categorial Grammars Steedman (1986) considers Categorial Grammars in which both the operations of function application and composition may be used, and in which function can specify whether they take their arguments from their right or left. While the generative power of CG's is greater that of CFG's, it appears to be highly con- strained. Hence, their relationship to formalisms such as HG's and TAG's is of interest. On the one hand, the definition of com- position in Steedm~- (1985), which technically permits compo- sition of functions with unbounded number of arguments, gen- erates tree sets with dependent paths such as those shown in Figure 6. This kind of dependency arises from the use of the b 2 Figure 6: Dependent branches from Categorial Grammars composition operation to compose two arbitrarily large cate- gories. This allows an unbounded amount of information about two separate paths (e.g. an encoding of their length) to be com- bined and used to influence the later derivation. A consequence of the ability to generate tree sets with this property is that CG's under this definition can generate the following language which can not be gener~_t_~_ by either TAG's or HG's. {a a 1 a 2 b I b 2 b [ n=nl +-2} On the other hand, no linguistic use is made of this general form of composition and Steedman (personal communication) and Steedman (1986) argues that a more limited definition of composition is more natural. With this restriction the resulting tree sets will have independent paths. The equivalence of CG's with this restriction to TAG's and HG's is, however, still an open problem. 2.6 Multicomponent TAG's An extension of the TAG system was introduced by Joshi et al. (1975) and later redefined by Joshi (1987) in which the adjunc- tion operation is defined on sets of elementary trees rather than single trees. A multicomponent Tree Adjoining Grammar (MC- TAG) consists of a finite set of finite elementary tree sets. We must adjoin all trees in an auxiliary tree set together as a single step in the derivation. The adjuncfion operation with respect to tree sets (multicomponent adjunction) is defined as follows. 106 Each member of a set of trees can be adjoined into distinct nodes of trees in a single elementary tree set, i.e, derivations always involve the adjunction of a derived auxiliary tree set into an elementary tree set. Like CFG's, TAG's, and HG's the derivation tree set of a MCTAG will be a local set. The derivation trees of a MCTAG are similar to those of a TAG. Instead of the names of elementary trees of a TAG, the nodes are labeled by a sequence of names of trees in an elementary tree set. Since trees in a tree set are adjoined together, the addressing scheme uses a sequence of pairings of the address and name of the elementary tree adjoined at that address. 
A context-free production analogous to the one given for TAG's captures the derivation step of the MCTAG shown in Figure 7, in which the trees in the auxiliary tree set are adjoined into themselves at the root node (address ε). The path complexity of the tree set generated by a MCTAG is not necessarily context-free. Like the string languages of MCTAG's, the complexity of the path set increases as the cardinality of the elementary tree sets increases, though both the string languages and the path sets will always be semilinear. MCTAG's are able to generate tree sets having dependent paths. For example, the MCTAG shown in Figure 7 generates trees of the form shown in Figure 4b.

[Figure 7: A MCTAG with dependent paths]

The number of paths that can be dependent is bounded by the grammar (in fact the maximum cardinality of a tree set determines this bound). Hence, the trees shown in Figure 8 can not be generated by any MCTAG (but can be generated by an IG), because the number of pairs of dependent paths grows with n.

[Figure 8: Trees with unbounded dependencies (height n)]

Since the derivation trees of TAG's, MCTAG's, and HG's are local sets, the choice of the structure used at each point in a derivation in these systems does not depend on the context at that point within the derivation. Thus, as in CFG's, at any point in the derivation, the set of structures that can be applied is determined only by a finite set of rules encapsulated by the grammar. We characterize a class of formalisms that have this property in Section 4. We loosely describe the class of all such systems as Linear Context-Free Rewriting Formalisms. As is described in Section 4, the property of having a derivation tree set that is a local set appears to be useful in showing important properties of the languages generated by the formalisms. The semilinearity of Tree Adjoining Languages (TAL's), MCTAL's, and Head Languages (HL's) can be proved using this property, with suitable restrictions on the composition operations.

3 Dependencies between Paths

Roughly speaking, we say that a tree set Γ contains trees with dependent paths if there are two paths p_γ = u_γ v_γ and q_γ = u_γ w_γ in each γ ∈ Γ such that u_γ is some, possibly empty, shared initial subpath; v_γ and w_γ are not bounded in length; and there is some "dependence" (such as equal length) between the set of all v_γ and w_γ for each γ ∈ Γ. A tree set may be said to have dependencies between paths if some "appropriate" subset can be shown to have dependent paths as defined above. We attempt to formalize this notion in terms of the tree pumping lemma, which can be used to show that a tree set does not have dependent paths. Thatcher (1973) describes a tree pumping lemma for recognizable sets related to the string pumping lemma for regular sets. The tree in Figure 9a can be denoted by t1 t2 t3, where tree substitution is used instead of concatenation. The tree pumping lemma states that if there is a tree t = t1 t2 t3 generated by a CFG G whose height is more than a predetermined bound k, then all trees of the form t1 t2^i t3, for each i ≥ 0, will also be generated by G (as shown in Figure 9b). The string pumping lemma for CFG's (the uvwxy-theorem) can be seen as a corollary of this lemma.

[Figure 9: Tree pumping lemma for local sets]

The fact that local sets do not have dependent paths follows from this pumping lemma: a single path can be pumped independently.
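The decomposition t = t1 t2 t3 and the pumped trees t1 t2^i t3 can be sketched at the string level. The encoding is my own (trees written as nested terms, with a single hole '*' marking where the next part substitutes in); it is meant only to make the substitution operation concrete.

    def substitute(context, part):
        # tree substitution: plug 'part' into the unique hole of 'context'
        return context.replace('*', part, 1)

    def pump(t1, t2, t3, i):
        # build t1 t2^i t3 by repeated substitution
        t = t3
        for _ in range(i):
            t = substitute(t2, t)
        return substitute(t1, t)

    t1, t2, t3 = 'S(*)', 'a(*)', 'leaf'
    print(pump(t1, t2, t3, 0))   # S(leaf)
    print(pump(t1, t2, t3, 3))   # S(a(a(a(leaf))))

Since t2 sits on a single path, pumping it lengthens exactly one path of the tree, which is why a tree set whose paths must agree in length cannot be a local set.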
For example, let us consider a tree set containing trees of the form shown in Figure 4a. The tree t2 must be on one of the two branches. Pumping t2 will change only one branch and leave the other branch unaffected. Hence, the resulting trees will no longer have two branches of equal size. We can give a tree pumping lemma for TAG's by adapting the uvwxy-theorem for CFL's, since the tree sets of TAG's have independent and context-free paths. This pumping lemma states that if there is a tree t = t1 t2 t3 t4 t5 generated by a TAG G such that its height is more than a predetermined bound k, then all trees of the form t1 t2^i t3 t4^i t5, for each i ≥ 0, will also be generated by G. Similarly, for tree sets with independent paths and more complex path sets, tree pumping lemmas can be given. We adapt the string pumping lemma for the class of languages corresponding to the complexity of the path set. A geometrical progression of language families defined by Weir (1987) involves tree sets with increasingly complex path sets. The independence of paths in the tree sets of the k-th grammatical formalism in this hierarchy can be shown by means of a tree pumping lemma of the form

t1 t2^i t3 t4^i ... t(2^(k+1))^i t(2^(k+1)+1).

The path sets of the tree sets at level k+1 have the complexity of the string language of level k. The independence of paths in a tree set appears to be an important property. A formalism generating tree sets with complex path sets can still generate only semilinear languages if its tree sets have independent paths and semilinear path sets. For example, the formalisms in the hierarchy described above generate semilinear languages although their path sets become increasingly more complex as one moves up the hierarchy. From the point of view of recognition, the independence of paths in the derivation structures suggests that a top-down parser (for example) can work on each branch independently, which may lead to efficient parsing using an algorithm based on the Divide and Conquer technique.

4 Linear Context-Free Rewriting Systems

From the discussion so far it is clear that a number of formalisms involve some type of context-free rewriting (they have derivation trees that are local sets). Our goal is to define a class of formal systems, and show that any member of this class will possess certain attractive properties. In the remainder of the paper, we outline how a class of Linear Context-Free Rewriting Systems (LCFRS's) may be defined, and sketch how semilinearity and polynomial recognition of these systems follow.

4.1 Definition

In defining LCFRS's, we hope to generalize the definition of CFG's to formalisms manipulating any structure, e.g. strings, trees, or graphs. To be a member of LCFRS a formalism must satisfy two restrictions. First, any grammar must involve a finite number of elementary structures, composed using a finite number of composition operations. These operations, as we see below, are restricted to be size preserving (as in the case of concatenation in CFG), which implies that they will be linear and non-erasing. A second restriction on the formalisms is that choices during the derivation are independent of the context in the derivation. As will be obvious later, their derivation tree sets will be local sets, as are those of CFG's. Each derivation of a grammar can be represented by a generalized context-free derivation tree. These derivation trees show how the composition operations were used to derive the final structures from elementary structures.
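One way to picture such a generalized derivation tree, before the formal description that follows, is as a nested application of named composition operations, evaluated bottom-up. The sketch below is my own illustration under CFG-style assumptions (structures are strings; the operation names f_p and f_q are invented for the example).

    OPS = {
        # f_p for a production p: A -> a A b (wraps its argument)
        'f_p': lambda x: 'a' + x + 'b',
        # f_q is a zero-arity operation naming an elementary structure
        'f_q': lambda: 'c',
    }

    def evaluate(tree):
        # tree = (operation name, subtree, ...); frontier nodes are 0-ary
        op, *kids = tree
        return OPS[op](*(evaluate(kid) for kid in kids))

    # a derivation tree with two internal f_p nodes yields a a c b b
    assert evaluate(('f_p', ('f_p', ('f_q',)))) == 'aacbb'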
Nodes are annotated by the name of the composition operation used at that step in the derivation. As in the case of the derivation trees of CFG's, nodes are labeled by a member of some finite set of symbols (perhaps only implicit in the grammar, as in TAG's) used to denote derived structures. Frontier nodes are annotated by zero-arity functions corresponding to elementary structures. Each treelet (an internal node with all its children) represents the use of a rule that is encapsulated by the grammar. The grammar encapsulates (either explicitly or implicitly) a finite number of rules that can be written as follows:

A → f_p(A1, ..., An)    n ≥ 0

In the case of CFG's, for each production p = A → u1 A1 ... un An u(n+1) (where each ui is a string of terminals) the function f_p is defined by f_p(x1, ..., xn) = u1 x1 ... un xn u(n+1). In the case of TAG's, a derivation step in which the derived trees β̂1, ..., β̂n are adjoined into γ at the addresses i1, ..., in would involve the use of the following rule. [Footnote 2: We denote a tree derived from the elementary tree γ by the symbol γ̂.]

γ̂ → f(γ, i1, ..., in)(β̂1, ..., β̂n)

The composition operations in the case of CFG's are parameterized by the productions. In TAG's the elementary tree and the addresses where adjunction takes place are used to instantiate the operation. To show that the derivation trees of any grammar in LCFRS form a local set, we can rewrite the annotated derivation trees such that every node is labelled by a pair that includes the composition operation. These systems are similar to those described by Pollard (1984) as Generalized Context-Free Grammars (GCFG's). Unlike GCFG's, however, the composition operations of LCFRS's are restricted to be linear (they do not duplicate unboundedly large structures) and nonerasing (they do not erase unbounded structures, a restriction made in most modern transformational grammars). These two restrictions impose the constraint that the result of composing any two structures should be a structure whose "size" is the sum of those of its constituents plus some constant. For example, the operation f_p discussed in the case of CFG's (in Section 4.1) adds a constant equal to the sum of the lengths of the strings u1, ..., u(n+1). Since we are considering formalisms with arbitrary structures, it is difficult to precisely specify all of the restrictions on the composition operations that we believe would appropriately generalize the concatenation operation for the particular structures used by the formalism. In considering recognition of LCFRS's, we make further assumptions concerning the contribution of each structure to the input string, and how the composition operations combine structures in this respect. We can show that the languages generated by LCFRS's are semilinear as long as the composition operation does not remove any terminal symbols from its arguments.

4.2 Semilinearity of LCFRL's

Semilinearity and the closely related constant growth property (a consequence of semilinearity) have been discussed in the context of grammars for natural languages by Joshi (1983/85) and Berwick and Weinberg (1984). Roughly speaking, a language L has the property of semilinearity if the number of occurrences of each symbol in any string is a linear combination of the occurrences of these symbols in some fixed finite set of strings. Thus, the length of any string in L is a linear combination of the lengths of strings in some fixed finite subset of L, and thus L is said to have the constant growth property.
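Since the argument below works entirely with counts of symbol occurrences, a small sketch of the Parikh map ψ used in it may help. The code is my own illustration, not part of the paper.

    from collections import Counter

    def psi(w):
        # Parikh map: the multiset of symbol counts of w
        return Counter(w)

    def letter_equivalent(u, v):
        return psi(u) == psi(v)

    # order is ignored: 'abab' and 'aabb' have the same Parikh image
    assert letter_equivalent('abab', 'aabb')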
Although this property is not structural, it depends on the structural property that sentences can be built from a finite set of clauses of bounded structure, as noted by Joshi (1983/85). The property of semilinearity is concerned only with the occurrence of symbols in strings and not their order. Thus, any language that is letter equivalent to a semilinear language is also semilinear. Two strings are letter equivalent if they contain equal numbers of occurrences of each terminal symbol, and two languages are letter equivalent if every string in one language is letter equivalent to a string in the other language and vice-versa. Since every CFL is known to be semilinear (Parikh, 1966), in order to show the semilinearity of some language, we need only show the existence of a letter equivalent CFL.

Our definition of LCFRS's insists that the composition operations are linear and nonerasing. Hence, the terminal symbols appearing in the structures that are composed are not lost (though a constant number of new symbols may be introduced). If ψ(A) gives the number of occurrences of each terminal in the structure named by A, then, given the constraints imposed on the formalism, for each rule A → f_p(A1, ..., An) we have the equality

ψ(A) = ψ(A1) + ... + ψ(An) + c_p

where c_p is some constant. We can obtain a letter equivalent CFL defined by a CFG in which, for each rule as above, we have the production A → A1 ... An u_p, where ψ(u_p) = c_p. Thus, the language generated by a grammar of a LCFRS is semilinear.

4.3 Recognition of LCFRL's

We now turn our attention to the recognition of the string languages generated by these formalisms (LCFRL's). As suggested at the end of Section 3, the restrictions that have been specified in the definition of LCFRS's suggest that they can be efficiently recognized. In this section, for the purpose of showing that polynomial time recognition is possible, we make the additional restriction that the contribution of a derived structure to the input string can be specified by a bounded sequence of substrings of the input. Since each composition operation is linear and nonerasing, a bounded sequence of substrings associated with the resulting structure is obtained by combining the substrings in each of its arguments using only the concatenation operation, including each substring exactly once. CFG's, TAG's, MCTAG's and HG's are all members of this class since they satisfy these restrictions.

Giving a recognition algorithm for LCFRL's involves describing the substrings of the input that are spanned by the structures derived by the LCFRS's, and how the composition operation combines these substrings. For example, in TAG's a derived auxiliary tree spans two substrings (to the left and right of the foot node), and the adjunction operation inserts another substring (spanned by the subtree under the node where adjunction takes place) between them (see Figure 3). We can represent any derived tree of a TAG by the two substrings that appear in its frontier, and then define how the adjunction operation concatenates the substrings. Similarly, for all the LCFRS's discussed in Section 2, we can define the relationship between a structure and the sequence of substrings it spans, and the effect of the composition operations on sequences of substrings. A derived structure will be mapped onto a sequence x1, ..., xk of substrings (not necessarily contiguous in the input), and the composition operations will be mapped onto functions that can be defined as follows.
f((x1, ..., xn1), (y1, ..., yn2)) = (z1, ..., zn3)

where each zi is the concatenation of strings from the xj's and yk's. [Footnote 3: In order to simplify the following discussion, we assume that each composition operation is binary. It is easy to generalize to the case of n-ary operations.] The linear and nonerasing assumptions about the operations discussed in Section 4.1 require that each xj and yk is used exactly once to define the strings z1, ..., zn3. Some of the operations will be constant functions, corresponding to elementary structures, and will be written as f() = (z1, ..., zk), where each zi is a constant, the string of terminal symbols a(1,i) ... a(ni,i). This representation of structures by substrings, and of the composition operations by their effect on substrings, is related to the work of Rounds (1985). Although embedding this version of LCFRS's in the framework of ILFP developed by Rounds (1985) is straightforward, our motivation was to capture properties shared by a family of grammatical systems and generalize them, defining a class of related formalisms. This class of formalisms has the properties that their derivation trees are local sets, and that they manipulate objects using a finite number of composition operations that use a finite number of symbols. With the additional assumptions, inspired by Rounds (1985), we can show that members of this class can be recognized in polynomial time.

4.3.1 Alternating Turing Machines

We use Alternating Turing Machines (Chandra, Kozen, and Stockmeyer, 1981) to show that polynomial time recognition is possible for the languages discussed in Section 4.3. An ATM has two types of states, existential and universal. In an existential state an ATM behaves like a nondeterministic TM, accepting if one of the applicable moves leads to acceptance; in a universal state the ATM accepts if all the applicable moves lead to acceptance. An ATM may be thought of as spawning independent processes for each applicable move. A k-tape ATM, M, has a read-only input tape and k read-write work tapes. A step of an ATM consists of reading a symbol from each tape and optionally moving each head one tape cell to the left or right. A configuration of M consists of a state of the finite control, the nonblank contents of the input tape and the k work tapes, and the position of each head. The space of a configuration is the sum of the lengths of the nonblank tape contents of the k work tapes. M works in space S(n) if for every string that M accepts no configuration exceeds space S(n). It has been shown in (Chandra et al., 1981) that if M works in space log n then there is a deterministic TM which accepts the same language in polynomial time. In the next section, we show how an ATM can accept the strings generated by a grammar of a LCFRS formalism in log space, and hence show that each family can be recognized in polynomial time.

4.3.2 Recognition by ATM

We define an ATM, M, recognizing a language generated by a grammar, G, having the properties discussed in Section 4.3. It can be seen that M performs a top-down recognition of the input a1 ... an in log space. The rewrite rules and the definitions of the composition operations may be stored in the finite state control, since G uses a finite number of them. Suppose M has to determine whether the k substrings z1, ..., zk can be derived from some symbol A. Since each zi is a contiguous substring of the input (say a(i1) ... a(i2)), and no two substrings overlap, we can represent zi by the pair of integers (i1, i2).
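Before the ATM details, the index-pair representation can be made concrete with a deterministic, memoized sketch. The grammar, names, and encoding below are my own illustration, not the paper's construction: a toy LCFRS in which A derives pairs of substrings, with A → g(A) where g((x1, x2)) = (a x1 b, c x2 d), the constant A → (ε, ε), and S → f(A) where f((x1, x2)) = x1 x2, so the language is { a^n b^n c^n d^n }.

    from functools import lru_cache

    def recognize(w):
        n = len(w)

        @lru_cache(maxsize=None)
        def A(i1, i2, j1, j2):
            # can A derive the pair of spans (w[i1:i2], w[j1:j2])?
            if i1 == i2 and j1 == j2:
                return True                      # constant: A -> (e, e)
            return (i2 - i1 >= 2 and j2 - j1 >= 2 and
                    w[i1] == 'a' and w[i2 - 1] == 'b' and
                    w[j1] == 'c' and w[j2 - 1] == 'd' and
                    A(i1 + 1, i2 - 1, j1 + 1, j2 - 1))

        # S -> f(A): the two spans are adjacent, so try every split point
        return any(A(0, m, m, n) for m in range(n + 1))

    assert recognize('aabbccdd') and not recognize('aabbccd')

There are only O(n^4) distinct index tuples and each is examined once, so the recognizer is polynomial; the ATM below achieves the same effect with log-space configurations instead of a memo table.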
We assume that M is in an existential state qA, with the integers i1 and i2 representing zi stored on the (2i−1)-th and 2i-th work tapes, for 1 ≤ i ≤ k. Consider each rule p: A → f_p(B, C), where f_p is mapped onto a function on substring sequences as defined above. M breaks z1, ..., zk into substrings x1, ..., xn1 and y1, ..., yn2 conforming to the definition of that function. M spawns as many processes as there are ways of breaking up z1, ..., zk and rules with A on their left-hand side. Each spawned process must check that x1, ..., xn1 and y1, ..., yn2 can be derived from B and C, respectively. To do this, the x's and y's are stored on the next 2n1 + 2n2 tapes, and M goes to a universal state. Two processes are spawned, requiring B to derive x1, ..., xn1 and C to derive y1, ..., yn2. Thus, for example, one successor process will have M in the existential state qB with the indices encoding x1, ..., xn1 on the first 2n1 tapes. For rules p: A → f_p() such that f_p is a constant function, giving an elementary structure, f_p is defined such that f_p() = (z1, ..., zk), where each zi is a constant string. M must enter a universal state and check that each of the k constant substrings is in the appropriate place (as determined by the contents of the first 2k work tapes) on the input tape. In addition to the tapes required to store the indices, M requires one work tape for splitting the substrings. Thus, the ATM has no more than 6 k_max + 1 work tapes, where k_max is the maximum number of substrings spanned by a derived structure. Since the work tapes store integers (which can be written in binary) that never exceed the size of the input, no configuration has space exceeding O(log n). Thus, M works in log space, and recognition can be done on a deterministic TM in polynomial time.

5 Discussion

We have studied the structural descriptions (tree sets) that can be assigned by various grammatical systems, and classified these formalisms on the basis of two features: path complexity, and path independence. We contrasted formalisms such as CFG's, HG's, TAG's and MCTAG's with formalisms such as IG's and unificational systems such as LFG's and FUG's. We addressed the question of whether or not a formalism can generate only structural descriptions with independent paths. This property reflects an important aspect of the underlying linguistic theory associated with the formalism. In a grammar which generates independent paths, the derivations of sibling constituents can not share an unbounded amount of information. The importance of this property becomes clear in contrasting the theories underlying GPSG (Gazdar, Klein, Pullum, and Sag, 1985) and GB (as described by Berwick, 1984) with those underlying LFG and FUG. It is interesting to note, however, that the ability to produce a bounded number of dependent paths (where two dependent paths can share an unbounded amount of information) does not require machinery as powerful as that used in LFG, FUG and IG's. As illustrated by MCTAG's, it is possible for a formalism to give tree sets with bounded dependent paths while still sharing the constrained rewriting properties of CFG's, HG's, and TAG's. In order to observe the similarity between these constrained systems, it is crucial to abstract away from the details of the structures and operations used by the system. The similarities become apparent when they are studied at the level of derivation structures: the derivation tree sets of CFG's, HG's, TAG's, and MCTAG's are all local sets.
Independence of paths at this level reflects the context-freeness of rewriting, and suggests why these systems can be recognized efficiently. As suggested in Section 4.3.2, a derivation with independent paths can be divided into subcomputations with limited sharing of information.

We outlined the definition of a family of constrained grammatical formalisms, called Linear Context-Free Rewriting Systems. This family represents an attempt to generalize the properties shared by CFG's, HG's, TAG's, and MCTAG's. Like HG's, TAG's, and MCTAG's, members of LCFRS can manipulate structures more complex than terminal strings and use composition operations that are more complex than concatenation. We place certain restrictions on the composition operations of LCFRS's, restrictions that are shared by the composition operations of the constrained grammatical systems that we have considered. The operations must be linear and nonerasing, i.e., they can not duplicate or erase structure from their arguments. Notice that even though IG's and LFG's involve CFG-like productions, they are (linguistically) fundamentally different from CFG's because the composition operations need not be linear. By sharing stacks (in IG's) or by using nonlinear equations over f-structures (in FUG's and LFG's), structures with unbounded dependencies between paths can be generated. LCFRS's share several properties possessed by the class of mildly context-sensitive formalisms discussed by Joshi (1983/85). The results described in this paper suggest a characterization of mild context-sensitivity in terms of generalized context-freeness.

Having defined LCFRS's, in Section 4.2 we established the semilinearity (and hence the constant growth property) of the languages generated. In considering the recognition of these languages, we were forced to be more specific regarding the relationship between the structures derived by these formalisms and the substrings they span. We insisted that each structure dominates a bounded number of (not necessarily adjacent) substrings. The composition operations are mapped onto operations that use concatenation to define the substrings spanned by the resulting structures. We showed that any system defined in this way can be recognized in polynomial time. Members of LCFRS whose operations have this property can be translated into the ILFP notation (Rounds, 1985). However, in order to capture the properties of the various grammatical systems under consideration, our notation is more restrictive than ILFP, which was designed as a general logical notation to characterize the complete class of languages that are recognizable in polynomial time. It is known that CFG's, HG's, and TAG's can be recognized in polynomial time, since polynomial time algorithms exist for each of these formalisms. A corollary of the result of Section 4.3 is that polynomial time recognition of MCTAG's is possible.

As discussed in Section 3, independent paths in tree sets, rather than the path complexity, may be crucial in characterizing semilinearity and polynomial time recognition. We would like to relax somewhat the constraint on the path complexity of formalisms in LCFRS. Formalisms such as the restricted indexed grammars (Gazdar, 1985) and members of the hierarchy of grammatical systems given by Weir (1987) have independent paths, but more complex path sets.
Since these path sets are semilinear, the property of independent paths in their tree sets is sufficient to cause semilinearity of the languages generated by them. In addition, the restricted version of CG's (discussed in Section 2.5) generates tree sets with independent paths, and we hope that it can be included in a more general definition of LCFRS's containing formalisms whose tree sets have path sets that are themselves LCFRL's (as in the case of the restricted indexed grammars, and the hierarchy defined by Weir).

LCFRS's have only been loosely defined in this paper; we have yet to provide a complete set of formal properties associated with members of this class. In this paper, our goal has been to use the notion of LCFRS's to classify grammatical systems on the basis of their strong generative capacity. In considering this aspect of a formalism, we hope to better understand the relationship between the structural descriptions generated by the grammars of a formalism and the properties of semilinearity and polynomial recognizability.

References

Berwick, R., 1984. Strong generative capacity, weak generative capacity, and modern linguistic theories. Comput. Ling. 10:189-202.

Berwick, R. and Weinberg, A., 1984. The Grammatical Basis of Linguistic Performance. MIT Press, Cambridge, MA.

Bresnan, J. W.; Kaplan, R. M.; Peters, P. S.; and Zaenen, A., 1982. Cross-serial Dependencies in Dutch. Ling. Inquiry 13:613-635.

Chandra, A. K.; Kozen, D. C.; and Stockmeyer, L. J., 1981. Alternation. J. ACM 28:114-122.

Gazdar, G., 1985. Applicability of Indexed Grammars to Natural Languages. Technical Report CSLI-85-34, Center for the Study of Language and Information.

Gazdar, G.; Klein, E.; Pullum, G. K.; and Sag, I. A., 1985. Generalized Phrase Structure Grammar. Blackwell Publishing, Oxford. Also published by Harvard University Press, Cambridge, MA.

Joshi, A. K., 1985. How Much Context-Sensitivity is Necessary for Characterizing Structural Descriptions: Tree Adjoining Grammars. In Dowty, D.; Karttunen, L.; and Zwicky, A. (editors), Natural Language Processing: Theoretical, Computational and Psychological Perspectives. Cambridge University Press, New York, NY. Originally presented in 1983.

Joshi, A. K., 1987. An Introduction to Tree Adjoining Grammars. In Manaster-Ramer, A. (editor), Mathematics of Language. John Benjamins, Amsterdam.

Joshi, A. K.; Levy, L. S.; and Takahashi, M., 1975. Tree Adjunct Grammars. J. Comput. Syst. Sci. 10(1).

Parikh, R., 1966. On Context Free Languages. J. ACM 13:570-581.

Pollard, C., 1984. Generalized Phrase Structure Grammars, Head Grammars and Natural Language. PhD thesis, Stanford University.

Rounds, W. C. LFP: A Logic for Linguistic Descriptions and an Analysis of its Complexity. To appear in Comput. Ling.

Rounds, W. C., 1969. Context-free Grammars on Trees. In IEEE 10th Annual Symposium on Switching and Automata Theory.

Steedman, M. J., 1985. Dependency and Coordination in the Grammar of Dutch and English. Language 61:523-568.

Steedman, M., 1986. Combinatory Grammars and Parasitic Gaps. Natural Language and Linguistic Theory (to appear).

Thatcher, J. W., 1973. Tree Automata: An Informal Survey. In Aho, A. V. (editor), Currents in the Theory of Computing, pages 143-172. Prentice Hall Inc., Englewood Cliffs, NJ.

Weir, D. J., 1987. Context-Free Grammars to Tree Adjoining Grammars and Beyond. Technical Report, Department of Computer and Information Science, University of Pennsylvania, Philadelphia.
1987
15
ON THE SUCCINCTNESS PROPERTIES OF UNORDERED CONTEXT-FREE GRAMMARS

M. Drew Moshier and William C. Rounds
Electrical Engineering and Computer Science Department
University of Michigan, Ann Arbor, Michigan 48109

1 Abstract

We prove in this paper that unordered, or ID/LP, grammars are exponentially more succinct than context-free grammars, by exhibiting a sequence (L_n) of finite languages such that the size of any CFG for L_n must grow exponentially in n, but which can be described by polynomial-size ID/LP grammars. The results have implications for the description of free word order languages.

2 Introduction

Context-free grammars in immediate dominance and linear precedence format were used in GPSG [3] as a skeleton for metarule generation and feature checking. It is intuitively obvious that grammars in this form can describe languages which are closed under the operation of taking arbitrary permutations of strings in the language. (Such languages will be called symmetric.) Ordinary context-free grammars, on the other hand, seem to require that all permutations of right-hand sides of productions be explicitly listed in order to describe certain symmetric languages. For an explicit example, consider the n-letter alphabet Σ_n = {a1, ..., an}. Let P_n be the set of all strings which are permutations of exactly these letters. It seems obvious that no context-free grammar could generate this language without explicitly listing it. Now try to prove that this is the case. This is in essence what we do in this paper. We also hope to get the audience for the paper interested in why the proof works!

To give some idea of the difficulty of our problem, we begin by recounting Barton's results [1] in this conference in 1985. (There is a general discussion in [2].) He showed that the universal recognition problem (URP) for ID/LP grammars is NP-complete. [Footnote 1: The universal recognition problem is to tell, for an ID/LP grammar G and a string w, whether or not w ∈ L(G).] This means that if P ≠ NP, then no polynomial algorithm can solve this problem. The difficulty of the problem seems to arise from the fact that the translation from an ID/LP grammar to a weakly equivalent CFG blows up exponentially. It is easy to show, assuming P ≠ NP, that any reasonable transformation from ID/LP grammars to equivalent CFGs cannot be done in polynomial time; Rounds has done this as a remark in [8]. In this paper, we remove the hypothesis P ≠ NP. That is, we can show that no algorithm whatever can effect the translation polynomially in all cases. (Unfortunately, this does not solve the P = NP question!)

Barton's reduction took a known NP-complete problem, the vertex-cover problem, and reduced it to the URP for ID/LP. The reduction makes crucial use of grammars whose production size can be arbitrarily large. Define the fan-out of a grammar to be the largest total number of symbol occurrences on the right-hand side of any production. For a CFG, this would be the maximum length of any RHS; for an ID/LP grammar, we would count symbols and their multiplicities. Barton's reduction does the following. For each instance of the vertex-cover problem, of size n, he constructs a string w and an ID/LP grammar of fanout proportional to n such that the instance has a vertex cover if and only if the string is generated by the grammar. He also notes that if all ID/LP grammars have fanout bounded by a fixed constant, then the URP can be solved in polynomial time. This brings us to the statement of our results.
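Before the formal statement, the gap being claimed can be seen concretely. The snippet below is my own illustration: the whole of P_n is licensed by the single unordered ID/LP rule S → a1, ..., an, while an explicit listing grows like n!.

    from itertools import permutations

    def P(n):
        letters = [f'a{i}' for i in range(1, n + 1)]
        return {' '.join(p) for p in permutations(letters)}

    print(len(P(4)))   # 24 = 4! strings, all from one rule of size O(n log n)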
Let P_n be the language described above. Clearly this language can be generated by the ID/LP grammar S → a1, ..., an, whose size in bits is O(n log n).

Theorem 1. There is a constant c > 1 such that any context-free grammar G_n generating P_n must have size Ω(c^n). [Footnote 2: This notation means that for infinitely many n, the size of G_n must be bigger than c^n.] Moreover, every ID/LP grammar generating P_n whose fanout is bounded by a fixed constant must likewise have exponential size.

The theorem does not actually depend on having a vocabulary which grows with n. It is possible to code everything homomorphically into a two-letter alphabet. However, we think that the result shows that ordinary CFGs, and bounded-fanout ID/LP grammars, are inadequate for giving succinct descriptions of languages whose vocabulary is open and whose word order can be very free. Thus, we prefer the statement of the result as it is.

We start the paper with the technical results, in Section 3, and continue with a discussion of the implications for linguistics in Section 4. The final section contains a proof of the Interchange Lemma of Ogden, Ross, and Winklmann [7], which is the main tool used for our results. This proof is included, not because it is new, but because we want to show a beautiful example of the use of combinatorial principles in formal linguistics, and because we think the proof may be generalized to other classes of grammars.

3 Technical Results

As we have said, our basic tool is the Interchange Lemma, which was first used to show that the "embedded reduplication" language { wxxy | w, x, and y ∈ {a, b, c}* } is not context-free. It was also used in Kac, Manaster-Ramer, and Rounds [6] to show that English is not CF, and by Rounds, Manaster-Ramer, and Friedman to show that reduplication even over length n strings requires context-free grammar size exponential in n. The current application uses the last-mentioned technique, but the argument is more complicated. We will discuss the Interchange Lemma informally, then state it formally. We will then show how to apply it in our case.

The IL relies on the following basic observation. Suppose we have a context-free language, and two strings in that language, each of which has a substring which is the yield of a subtree labeled by the same nonterminal symbol at the respective roots of the subtrees. Then these substrings can be interchanged, and the resulting strings will still be in the language. This is what distinguishes the IL from the Pumping Lemma, which finds repeated nonterminals in the derivation tree of just one string.

The next observation about the IL is that it attempts to find these interchangeable strings among the length n strings of the given language. Moreover, we want to find a whole set of such strings, such that in the set, the interchanged substrings all have the same length and all start at the same position in the host string. The lemma lets us select a number m less than n, and tells us that the length k of the interchangeable substrings is between m/r and m, where r is the fanout of the grammar. Finally, the lemma gives us an estimate of the size of the interchangeable subset. We may choose an arbitrary subset Q(n) of L(n), where L(n) is the set of length n strings in the language L. If we also choose an integer m < n, then the IL tells us that there is an interchangeable set A ⊆ Q(n) such that |A| ≥ |Q(n)| / (|N| n²), where the vertical bars denote cardinality, and N is the set of nonterminals of the given grammar. (The interchanged strings do not stay in Q(n), but they do stay in L(n).) Notice that if Q(n) is exponential in size, then A will be also. Thus, if a language has exponentially many strings of length n, then it will have an interchangeable subset of roughly the same exponential size, provided the set of nonterminals of the grammar is small. Our proof turns this idea around. We show that any CF description of the permutation language L(n) must have an exponentially large set of nonterminals, because an interchangeable subset of this language cannot be of the same exponential order as n!, which is the size of L(n).

Now we can give a more formal statement of the lemma.

Definition. Suppose that A is a subset {z1, ..., zp} of L(n). A has the k-interchangeability property iff there are substrings x1, ..., xp of z1, ..., zp respectively, such that each xi has length k, each xi occurs in the same relative position in each zi, and such that if zi = wi xi yi and zj = wj xj yj for any i and j, then wi xj yi is an element of L(n).

Interchange Lemma. Let G be a CFG or ID/LP grammar with fanout r, and with nonterminal alphabet N. Let m and n be any positive natural numbers with r < m ≤ n. Let L(n) be the set of length n strings in L(G), and Q(n) be a subset of L(n). Then we can find a k-interchangeable subset A of Q(n), such that m/r ≤ k ≤ m, and such that |A| ≥ |Q(n)| / (|N| n²).

Now we can prove our main theorem. First we show that no CFG of fanout 2 can generate L(n) without an exponential number of nonterminals. The theorem for any CFG then follows, because any CFG can be transformed into a CFG with fanout 2 by a process essentially like that of transforming into Chomsky normal form, but without having to eliminate ε-productions or unit productions. This process at most cubes the grammar size, and the result follows because the cube root of an exponential is still an exponential. The proof for bounded-fanout ID/LP is a direct adaptation of the proof for fanout 2, which we now give.

Let P_n be the permutation language above, and let G be a fanout 2 grammar for this language. Apply the Interchange Lemma to G, choosing Q(n) = P_n, r = 2, and m = n/2. (n will be chosen as a multiple of 4.) Observe that |Q(n)| = |L(n)| = n!. From the IL, we get a k-interchangeable subset A of L(n), such that n/4 ≤ k ≤ n/2, and such that

|A| ≥ n! / (|N| n²).

Next we use the fact that A is k-interchangeable to get an upper bound on its cardinality. Let w1 x1 y1 and w2 x2 y2 be members of A, and let Σ(x) be the set of alphabet characters appearing in x. We claim that Σ(x1) = Σ(x2). For if, say, x2 has a character not occurring in x1, then the interchanged string w1 x2 y1 will have two occurrences of that character, and thus not be in L(n), as required by the IL. Without loss of generality, Σ(x) = {a1, ..., ak}. The number of strings in A is thus less than or equal to the number of ways of selecting the x string (that is, k!) times the number of ways of choosing the characters in the rest of the string (that is, (n−k)!). In other words, |A| ≤ k! (n−k)!. Putting the two inequalities together and solving for |N|, we get
(The interchanged strings do not stay in Q(n), but they do stay in L(n). ) Notice that if Q(n) is exponential in size, then A will be also. Thus, if a language has exponentially many strings of length n then it will have an interchangeable subset of roughly the same exponential size, provided the set of nontermi- nals of the grammar is small. Our proof turns this idea around. We show that any CF description of the permu- tation language L(n) must have an exponentially large set of nonterminals, because an interchangeable subset of this language cannot be of the same exponential order as n!, which is the size of L(n). Now we can give a more formal statement of the lem/'fla. Definition. Suppose that A is a subset {zl ..... -p} of L(n). A has the k-interchangeability property iff there are substrings Zh ..., z v of zl, ..., z v respectively, such that each z, has length k, each z~ occurs in the same relative position in each zi, and such that if z~ = wiziy( and z i = wjziV j for any i and j, then wi~jVl is an element of L(n). Interchange Lemma. Let G be a CFG or ID/LP grammar with fanout r, and with nonterminal alphabet N. Let m and n be any positive natural numbers with r < m_< n. Let L(n) be the set of length nstringsin L(G), and Q(n) be a subset of L(n). Then we can find a k-interchangeable subset A of Q(n), such that m/r <_ k _< m, and such that Ial >_ IQ(n)ll (INI" n2). Now we can prove our main theorem. First we show that no CFG of fanout 2 can generate L(n) without an exponential number of nonterminals. The theorem for any CFG then follows, because any CFG can be trans- formed, into a CFG with fanout 2 by a process essentially like that of transforming into Chornsky normal form, but without having to eliminate e-productions or unit produc- tions. This process at most cubes the grammar size, and the result follows because the cube root of an exponen- tial is still an exponential. The proof for bounded-fanout ID/LP is a direct adaptation of the proof for fanout 2, which we now give. Let Pn be the permutation language above, and let G be a fanout 2 grammar for this language. Apply the Interchange Lemma to G, choosing Q(n) = P~, r = 2, and m = n/2. (n will be chosen as a multiple of 4.) Observe that IQ(n)l = IL(n)[ = n!. From the IL, we get a k-interchangeable subset A of L(n), such that n/4 < k < n/2, and such that n! IAI _> INI" n'-" Next we use the fact that A is k-interchangeable to get an upper bound on its cardinality. Let wtztyt and w~.=~.y~. be members of A, and let E(z) be the set of alphabet characters appearing in z. We claim that E(zl) = ~(z~_). For if, say =t has a character not occurring in z~., then the interchanged string wtz2yl will have two occurrences of that character, and thus not be in L(n), as required by the IL. Without loss of generality, ,.V.(z) = {al ..... ak}. The number of strings in A is thus less than or equal to the number of ways of selecting the z string - that is, k!, times the number of ways of choosing the characters in the rest of the string - that is, (n - k)!. In other words, IAI < k! (n - k)!. Putting the two inequalities together and solving for IN[, 113 we get INI > k! (n - k)! " n "W = n -~" " From Pascal's triangle in high school mathematics, (i) in- creases with k until k - n/2. Thus since n/4 < k < n/2, we have (i) > (n~4), which by using Stirling's approxi- mation m! ".., mm e-m~/27rm to estimate the various factorials, grows exponentially with n. Therefore, so does IN[, and our theorem is proved. 
To obtain the result for a two-letter alphabet, consider the homomorphism sending the letter aj into 0^j 1. Let K_n be the image of P_n under this mapping. Then, because the mapping is one-to-one, P_n is the inverse homomorphic image of K_n. If for every c > 1 there is a sequence of CFGs G_n generating K_n such that the size of G_n is not Ω(c^n), then the same is true for the language P_n, contradicting Theorem 1. The reason is that the size of a grammar for the inverse homomorphic image of a language need only be polynomially bigger than the size of a grammar for the language itself. The proof of this claim rests on inspection of one of the standard proofs, say Hopcroft and Ullman [5]. The result is proved using pushdown automata, but all conversions from pdas to grammars require only polynomial increase in size.

Our final technical result concerns an n-symbol analogue of the so-called MIX language, which has been conjectured by Marsh not to be an indexed language (see [4] for discussion). We define the language M_n to be the set of all strings over Σ_n which have identical numbers of occurrences of each character ai in Σ_n. Observe that M_n is infinite for each n. However, there is a sequence of finite sublanguages of the various M_n such that this sequence requires exponentially increasing context-free descriptions. We have the following theorem.

Theorem 2. Consider the set M_n(n²) of all length n² strings of M_n. Then there is a constant c > 1 such that any context-free grammar G_n generating M_n(n²) must have size Ω(c^n).

Proof. This proof is really just a generalization of the proof of Theorem 1. It uses, however, the Q subsets in a way that the proof of Theorem 1 does not. First, we drop the n subscript in M_n(n²). Observe next that in every string in M(n²), each character in Σ_n occurs exactly n times. Let Q(n²) = { u^n : |u| = n } be the subset of M(n²) where, as indicated, each string is composed of n identical substrings concatenated in order. Then each u substring must be a permutation of Σ_n, i.e., a member of P_n. Let G_n be a fanout 2 grammar generating M(n²). As in the proof of Theorem 1, apply the Interchange Lemma to G_n, choosing Q(n²) as above, r = 2, and m = n/2. Observe that we still have |Q(n²)| = n!. From the IL, we get a k-interchangeable subset A of Q(n²), such that n/4 ≤ k ≤ n/2, and such that

|A| ≥ n! / (|N| n⁴).

Once again we use the fact that A is k-interchangeable to get an upper bound on its cardinality. Let w1 x1 y1 and w2 x2 y2 be members of A, and let Σ(x) be the set of alphabet characters appearing in x. We claim once again that Σ(x1) = Σ(x2). To see this, notice that the x portions of the strings in A can overlap at most one of the boundaries between the successive u strings, because |u| = n and |x| ≤ n/2. If it does not overlap a boundary, then the reasoning is as before. If it does overlap a boundary, then we claim that the characters in x occurring to the right of the boundary must all be different from the characters in x to the left. This is because of the "wraparound phenomenon": the u strings are identical, so the x characters to the right of the boundary are the same characters which occur to the right of the previous u-boundary. Since each u is a permutation of Σ_n, the claim holds. The same reasoning now applies to show that Σ(x1) = Σ(x2).
For if, say, x2 has a character not occurring in x1, then one of the u-portions of the interchanged string w1 x2 y1 will have two occurrences of that character, and the string will thus not be in M(n²), as required by the IL. Without loss of generality, Σ(x) = {a1, ..., ak}. The number of strings in A is less than or equal to the number of ways of selecting one of the u strings. Consider the u string to the left of the boundary which x overlaps. Because of wraparound, this u string is still determined by selecting the k positions covered by x, and then choosing the characters in the remaining n − k positions. Thus we still have |A| ≤ k! (n−k)!, and we finish the proof as above.

4 Discussion

What do Theorems 1 and 2 literally mean as far as linguistic descriptions are concerned? First, we notice that the permutation language P_n really has a counting property: there is exactly one occurrence of each symbol in any string. The same is true if we consider, for fixed m, the strings of length mn in M_n, as n varies. Here there must be exactly m occurrences of each symbol in Σ_n, in every string. It seems unreasonable to require this counting property as a property of the sublanguage generated by any construction of ordinary language. For example, a list of modifiers, say adjectives, could allow arbitrary repetitions of any of its basic elements, and not insist that there be at most one occurrence of each modifier. So these examples do not have any direct, naturally occurring, linguistic analogues. It is only if we wish to describe permutation-like behavior where the number of occurrences of each symbol is bounded, but with an un-
Then we can find a k-interchangeable subset .4 of ~(n), such that m/r < k _< m, and such that IAI >_ IQCn)I/(I.'Vl • rib. Proof. The proof breaks into two distinct parts: one involving the Pigeonhole Principle, and another involving an argument about paths in derivation trees with fanout r. The two parts are related by the following definition. Fix n, r, and m as in the statement of the IL. A tuple (j, k, B), where j and k are integers between i and n, and where B E N, is said to describe a string z of length n, if (i) there is a (full) derivation tree for z in G, having a subtree whose root is labeled with B, and the subtree exactly covers that portion of z beginning at position j, and having length k; and (ii) k satisfies the inequality stated in the conclusion of the IL. Notice that if one tuple describes every string in a set A, then, since G is context-free, A is k-interchangeable. The part of the proof involving derivation trees can now be stated: we claim that every string : in L(G) has at least one tuple describing it. To see that this is true, execute the following algorithm. Let z E L(G). Begin at the root (S) node of a derivation tree for :, and make that the "current node." At each stage of the algorithm, move the current node down to a daughter node having the longest possible yield length of its dominated subtree, while the yield length of the current node is strictly bigger than m. Let B be the label of the final value of the current node, let j be the position where the yield of the final value of the current node starts, and let k be the length of that yield. By the algorithm, k <_ m. If k < m/r, then since the grammar has fanout r, then the node above the final value of the current node would have yield length less than m, so it would have been the final value of the current node, a contradiction. This establishes the claim. Now we give the combinatory part of the proof. Let E and F be finite sets, and let J~ be a binary relation (set of ordered pairs) between E and F. R is said to cover F if every element of F participates in at least one pair of R. Also, we define, for e E E, R(e) = {f ] e R f}. One version of the Pigeonhole Principle can be stated as follows. Lemma 1 If R covers F, then there is an element e E E such that IR(e)l > [FI/IEI- Proof: Since R covers F, we know IFI _< ~ IR(e)l ere If ]R(e)[ < IFI/IEI for every e, then IFI < ~"~(IFI/lED = IFI, eEE a contradiction. Now let E be the set of all tuples (j, k, B) where j and k are less than or equal to n, and B E N. Then ]E[ = iN[. n 2. Let F = Q(n). Let e R f iff e describes f. By the first part of our proof, R covers F. Thus let e be a tuple given by the conclusion of the Pigeonhole Principle, and let A be R(e). The size of .4 is correct, and since e describes everything in A, then A is k-interchangeable. This completes the proof and the paper. References [1] Barton, G.E, Jr., The Computational Difficulty of ID/LP Parsing. Proc. 23rd Ann. Meeting of ACL , July 1985, 76-81. [2] Barton, G.E., Jr., R.C. Berwick, and E.S. Ristad, Computational Complezity and Natural Language. MIT Press, Cambridge, Mass., 1986. 115 [3] Gazdar, G. Klein, E., Pullum, G., and Sag, I., Gen- eralized Phrase Structure Grammar. Harvard Univ. Press, Cambridge, biass., 1985. [4] Gazdar, G., Applicability of Indexed Grammars to Natural Languages, CSLI report CSLI-85-34, Stan- ford University, 1985. [5] Hopcroft, J., and J. Ullman, Introduction to Automata Theory, Languages, and Computation, Addison-Wesley, Reading, Mass., 1979. 
[6] Kac, M., Manaster-Ramer, A., and Rounds, W., Simultaneous-Distributive Coordination and Context-Freedom, Computational Linguistics, to appear 1987.

[7] Ogden, William, Rockford J. Ross, and Karl Winklmann, An 'interchange lemma' for context-free languages. SIAM Journal of Computing 14:410-415, 1985.

[8] Rounds, W., The Relevance of Complexity Results to Natural Language Processing, to appear in Processing of Linguistic Structure, P. Sells and T. Wasow, eds., MIT Press.

[9] Rounds, W., A. Manaster-Ramer, and J. Friedman, Finding Formal Languages a Home in Natural Language Theory, in Mathematics of Language, ed. A. Manaster-Ramer, John Benjamins, Amsterdam, to appear.
1987
16
CONTEXT-FREENESS OF THE LANGUAGE ACCEPTED BY MARCUS' PARSER

R. Nozohoor-Farshi
School of Computing Science, Simon Fraser University
Burnaby, British Columbia, Canada V5A 1S6

ABSTRACT

In this paper, we prove that the set of sentences parsed by Marcus' parser constitutes a context-free language. The proof is carried out by constructing a deterministic pushdown automaton that recognizes those strings of terminals that are parsed successfully by the Marcus parser.

1. Introduction

While Marcus [4] does not use phrase structure rules as a base grammar in his parser, he points out some correspondence between the use of a base rule and the way packets are activated to parse a construct. Charniak [2] has also assumed some phrase structure base rules in implementing a Marcus style parser that handles ungrammatical situations. However, neither has suggested a type for such a grammar or the language accepted by the parser. Berwick [1] relates Marcus' parser to LR(k,t) context-free grammars. Similarly, in [5] and [6] we have related this parser to LRRL(k) grammars. Inevitably, these raise the question of whether the string set parsed by Marcus' parser is a context-free language.

In this paper, we provide the answer for the above question by showing formally that the set of sentences accepted by Marcus' parser constitutes a context-free language. Our proof is based on simulating a simplified version of the parser by a pushdown automaton. Then some modifications of the PDA are suggested in order to ascertain that Marcus' parser, regardless of the structures it puts on the input sentences, accepts a context-free set of sentences. Furthermore, since the resulting PDA is a deterministic one, it confirms the determinism of the language parsed by this parser. Such a proof also provides a justification for assuming a context-free underlying grammar in the automatic generation of Marcus type parsers, as discussed in [5] and [6].

2. Assumption of a finite size buffer

Marcus' parser employs two data structures: a pushdown stack which holds the constructs yet to be completed, and a finite size buffer which holds the lookaheads. The lookaheads are completed constructs as well as bare terminals. Various operations are used to manipulate these data structures. An "attention shift" operation moves a window of size k (=3) to a given position on the buffer. This occurs in parsing some constructs, e.g., some NP's, in particular when a buffer node other than the first indicates the start of an NP. "Restore buffer" restores the window to its previous position before the last "attention shift". Marcus suggests that the movements of the window can be achieved by employing a stack of displacements from the beginning of the buffer, and in general he suggests that the buffer could be unbounded on the right. But in practice, he notes that he has not found a need for more than five cells, and PARSIFAL does not use a stack to implement the window or virtual buffer.

A comment regarding an infinite buffer is in place here. An unbounded buffer would yield a parser with two stacks. Generally, such parsers characterize context-sensitive languages and are equivalent to linear bounded automata. They have also been used for parsing some context-free languages. In this role they may hide the non-determinism of a context-free language by storing an unbounded number of lookaheads. For example, LR-regular [3], BCP(m,n), LR(k,∞) and FSPA(k) parsers [8] are such parsers.
Furthermore, basing parsing decisions on the whole left context and k lookaheads has often resulted in defining classes of context-free (context-sensitive) grammars with undecidable membership. LR-regular, LR(k,∞) and FSPA(k) are such classes. The class of GLRRL(k) grammars with unbounded buffer (defined in [5]) seems to be the known exception in this category that has decidable membership. Walters [9] considers context-sensitive grammars with deterministic two-stack parsers and shows the undecidability of the membership problem for the class of such grammars.

In this paper we assume that the buffer in a Marcus style parser can only be of a finite size b (e.g., b=5 in Marcus' parser). The limitation on the size of the buffer has two important consequences. First, it allows a proof for the context-freeness of the language to be given in terms of a PDA. Second, it facilitates the design of an effective algorithm for automatic generation of a parser. (However, we should add that: 1- some Marcus style parsers that use an unbounded buffer in a constrained way, e.g., by restricting the window to the rightmost elements of the buffer, are equivalent to pushdown automata; 2- Marcus style parsers with unbounded buffer, similar to GLRRL parsers, can still be constructed for those languages which are known to be context-free.)

3. Simplified parser

A few restrictions on Marcus' parser will prove to be convenient in outlining a proof for the context-freeness of the language accepted by it.

(i) Prohibition of features: Marcus allows syntactic nodes to have features containing the grammatical properties of the constituents that they represent. For implementation purposes, the type of a node is also considered as a feature. However, here a distinction will be made between this feature and others. We consider the type of a node and the node itself to convey the same concept (i.e., a non-terminal symbol). Any other feature is disallowed. In Marcus' parser, the binding of traces is also implemented through the use of features. A trace is a null deriving non-terminal (e.g., an NP) that has a feature pointing to another node, i.e., the binding of the trace. We should stress at the outset that Marcus' parser outputs the annotated surface structure of an utterance, and traces are intended to be used by the semantic component to recover the underlying predicate/argument structure of the utterance. Therefore one could put aside the issue of trace registers without affecting any argument that deals with the strings accepted by the parser, i.e., the frontiers of surface structures.

(ii) Non-accessibility of the parse tree: Although most of the information about the left context is captured through the use of the packeting mechanism in Marcus' parser, he nevertheless allows limited access to the nodes of the partial parse tree (besides the current active node) in the action parts of the grammar rules. In some rules, after the initial pattern matches, conditional clauses test for some property of the parse tree. These tests are limited to the left daughters of the current active node and the last cyclic node (NP or S) on the stack and its descendants. It is plausible to eliminate tree accessibility entirely through adding new packets and/or simple flags. In the simplified parser, access to the partial parse tree is disallowed. However, by modifying the stack symbols of the
PDA, we will later show that the proof of context-freeness carries over to the general parser (that tests limited nodes of the parse tree).

(iii) Atomic actions: Action segments in Marcus' grammar rules may contain a series of basic operations. To simplify the simulation, we assume that in the simplified parser actions are atomic. Breakdown of a compound action into atomic actions can be achieved by keeping the first operation in the original rule and introducing new singleton packets containing a default pattern and a remaining operation in the action part. These packets will successively deactivate themselves and activate the next packet, much like "run <rule> next"s in PIDGIN. The last packet will activate the first if the original rule leaves the packet still active. Therefore in the simplified parser action segments are of the following forms:

(1) Activate packets1; [deactivate packets2].
(2) Deactivate packets1; [activate packets2].
(3) Attach ith; [deactivate packets1]; [activate packets2].
(4) [Deactivate packets1]; create node; activate packets2.
(5) [Deactivate packets1]; cattach node; activate packets2.
(6) Drop; [deactivate packets1]; [activate packets2].
(7) Drop into buffer; [deactivate packets1]; [activate packets2].
(8) Attention shift (to ith cell); [deactivate packets1]; [activate packets2].
(9) Restore buffer; [deactivate packets1]; [activate packets2].

["Cattach" is used as a short notation for "create and attach".]

Note that forward attention shift has no explicit command in Marcus' rules. An "AS" prefix in the name of a rule implies the operation. Backward window movement has an explicit command "restore buffer". The square brackets in the above forms indicate optional parts. Feature assignment operations are ignored for the obvious reason.

4. Simulation of the simplified parser

In this section we construct a PDA equivalent to the simplified parser. This PDA recognizes the same string set that is accepted by the parser. Roughly, the states of the PDA are symbolized by the contents of the parser's buffer, and its stack symbols are ordered pairs consisting of a non-terminal symbol (i.e., a stack symbol of the parser) and a set of packets associated with that symbol. Let N be the set of non-terminal symbols, and Σ the set of terminal symbols of the parser. We assume the top S node, i.e., the root of a parse tree, is denoted by S0, a distinct element of N. We also assume that a final packet is added to the PIDGIN grammar. When the parsing of a sentence is completed, the activation of this packet will cause the root node S0 to be dropped into the buffer, rather than being left on the stack. Furthermore, let P denote the set of all packets of rules, and 2^P the powerset of P, and let P, P1, P2, ... be elements of 2^P. When a set of packets P is active, the pattern segments of the rules in these packets are compared with the current active node and the contents of the virtual buffer (the window). Then the action segment of a rule with highest priority that matches is executed. In effect the operation of the parser can be characterized by a partial function M from active packets, current active node and contents of the window into atomic actions, i.e.,

M: 2^P × N^(1) × V^(k) → ACTIONS

where V = N ∪ Σ, V^(k) = V^0 + V^1 + ... + V^k, and ACTIONS is the set of atomic actions (1) - (9) discussed in the previous section.

Now we can construct the equivalent PDA A = (Q, Σ, Γ, δ, q0, Z0, f) in the following way.

Σ = the set of input symbols of A, is the set of terminal symbols in the simplified parser.
Now we can construct the equivalent PDA A = (Q, Σ, Γ, δ, q0, Z0, f) in the following way.

Σ = the set of input symbols of A, is the set of terminal symbols in the simplified parser.

Γ = the set of stack symbols [X,P], where X ∈ N is a non-terminal symbol of the parser and P is a set of packets.

Q = the set of states of the PDA, each of the form <P1,P2,buffer>, where P1 and P2 are sets of packets. In general P1 and P2 are empty sets except for those states that represent dropping of a current active node in the parser. P1 is the set of packets to be activated explicitly after the drop operation, and P2 is the set of those packets that are deactivated. "buffer" is a string over V interspersed with vertical bars. The last vertical bar in "buffer" denotes the position of the current window in the parser and those on the left indicate former window positions.

q0 = the initial state = <∅,∅,λ>, where λ denotes the null string.

f = the final state = <∅,∅,|S0>. This state corresponds to the outcome of an activation of the final packet in the parser. In this way, i.e., by dropping the S0 node into the buffer, we can show the acceptance of a sentence simultaneously by empty stack and by final state.

Z0 = the start symbol = [S0,P0], where P0 is the set of initial packets, e.g., {SS-Start, C-Pool} in Marcus' parser.

δ = the move function of the PDA, defined in the following way. Let P denote a set of active packets, X an active node and W1W2...Wn, n ≤ k, the content of a window. Let α|W1W2...Wnβ be a string (representing the buffer) such that α is a string over V interspersed with vertical bars, β ∈ V*, and Length(α'W1W2...Wnβ) ≤ b, where α' is the string α in which vertical bars are erased.

Non-λ-moves: The non-λ-moves of the PDA A correspond to bringing the input tokens into the buffer for examination by the parser. In Marcus' parser input tokens come to the attention of the parser as they are needed. Therefore, we can assume that when a rule tests the contents of n cells of the window and there are fewer tokens in the buffer, terminal symbols will be brought into the buffer. More specifically, if M(P,X,W1...Wn) has a defined value (i.e., P contains a packet with a rule that has pattern segment [X][W1]...[Wn]), then

    δ(<∅,∅,α|W1...Wj>, Wj+1, [X,P]) = (<∅,∅,α|W1...WjWj+1>, [X,P])

for all α, and for j = 0, ..., n−1 and Wj+1 ∈ Σ.

λ-moves: By λ-moves, the PDA mimics the actions of the parser on successful matches. Thus the δ-function on λ input corresponding to each individual atomic action is determined according to one of the following cases.

Cases (1) and (2): If M(P,X,W1W2...Wn) = "activate P1; deactivate P2" (or "deactivate P2; activate P1"), then

    δ(<∅,∅,α|W1W2...Wnβ>, λ, [X,P]) = (<∅,∅,α|W1W2...Wnβ>, [X,(P ∪ P1)−P2])

for all α and β.

Case (3): If M(P,X,W1W2...Wi...Wn) = "attach ith (normally i is 1); deactivate P1; activate P2", then

    δ(<∅,∅,α|W1...Wi...Wnβ>, λ, [X,P]) = (<∅,∅,α|W1...Wi−1Wi+1...Wnβ>, [X,(P ∪ P2)−P1])

for all α and β.

Cases (4) and (5): If M(P,X,W1...Wn) = "deactivate P1; create/cattach Y; activate P2", then

    δ(<∅,∅,α|W1...Wnβ>, λ, [X,P]) = (<∅,∅,α|W1...Wnβ>, [X,P−P1][Y,P2])

for all α and β.

Case (6): If M(P,X,W1...Wn) = "drop; deactivate P1; activate P2", then

    δ(<∅,∅,α|W1...Wnβ>, λ, [X,P]) = (<P2,P1,α|W1...Wnβ>, λ)

for all α and β, and furthermore

    δ(<P2,P1,α|W1...Wnβ>, λ, [Y,P']) = (<∅,∅,α|W1...Wnβ>, [Y,(P' ∪ P2)−P1])

for all α and β, and P' ∈ 2^P, Y ∈ N. The latter move corresponds to the deactivation of the packets P1 and activation of the packets P2 that follow the dropping of a current active node.
Case (7): If M(P,X,W1...Wn) = "drop into buffer; deactivate P1; activate P2" (where n < k), then

    δ(<∅,∅,α|W1...Wnβ>, λ, [X,P]) = (<P2,P1,α|XW1...Wnβ>, λ)

for all α and β, and furthermore

    δ(<P2,P1,α|XW1...Wnβ>, λ, [Y,P']) = (<∅,∅,α|XW1...Wnβ>, [Y,(P' ∪ P2)−P1])

for all α and β, and for all P' ∈ 2^P and Y ∈ N.

Case (8): If M(P,X,W1...Wi...Wn) = "shift attention to ith cell; deactivate P1; activate P2", then

    δ(<∅,∅,α|W1...Wi...Wnβ>, λ, [X,P]) = (<∅,∅,α|W1...|Wi...Wnβ>, [X,(P ∪ P2)−P1])

for all α and β.

Case (9): If M(P,X,W1...Wn) = "restore buffer; deactivate P1; activate P2", then

    δ(<∅,∅,α|ω|W1...Wnβ>, λ, [X,P]) = (<∅,∅,α|ωW1...Wnβ>, [X,(P ∪ P2)−P1])

for all α, ω, and β such that ω contains no vertical bar.

Now from the construction of the PDA, it is obvious that A accepts those strings of terminals that are parsed successfully by the simplified parser. The reader may note that the value of δ is undefined for the cases in which M(P,X,W1...Wn) has multiple values. This accounts for the fact that Marcus' parser behaves in a deterministic way. Furthermore, many of the states of A are unreachable. This is due to the way we constructed the PDA, in which we considered activation of every subset of P with any active node and any lookahead window.
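As an illustration of this construction, here is a small, hedged sketch (ours, not the paper's) that compiles a toy rule table M into δ-moves for Cases (1)/(2) and Case (6); the buffer strings, packet names and node symbols are invented, and the real construction would cover all nine cases.

    # Hedged sketch: compiling a toy M-table into lambda-moves of the PDA.
    # A state is (P1, P2, buffer-window); a stack symbol is (node, packets).
    EMPTY = frozenset()

    M = {
        (frozenset({"Parse-Subj"}), "NP", ("*noun",)):
            # "activate P1; deactivate P2"  (Cases (1)/(2))
            ("activate", frozenset({"Parse-VP"}), frozenset({"Parse-Subj"})),
        (frozenset({"Parse-VP"}), "VP", ("*final",)):
            # "drop; deactivate P1; activate P2"  (Case (6))
            ("drop", frozenset({"Parse-VP"}), frozenset({"S-Pool"})),
    }

    def lambda_moves(M, packet_sets, nonterminals):
        delta = {}  # (state, input, stack_top) -> (new_state, pushed_symbols)
        for (P, X, window), action in M.items():
            state = (EMPTY, EMPTY, window)
            if action[0] == "activate":          # Cases (1)/(2)
                _, P1, P2 = action
                delta[(state, None, (X, P))] = (state, [(X, (P | P1) - P2)])
            elif action[0] == "drop":            # Case (6): two moves
                _, P1, P2 = action
                mid = (P2, P1, window)
                delta[(state, None, (X, P))] = (mid, [])      # pop [X,P]
                for Pp in packet_sets:                        # exposed [Y,P']
                    for Y in nonterminals:
                        delta[(mid, None, (Y, Pp))] = (
                            state, [(Y, (Pp | P2) - P1)])
        return delta

    delta = lambda_moves(M, [frozenset({"S-Pool"})], ["S"])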
5. Simulation of the general parser

It is possible to lift the restrictions on the simplified parser by modifying the PDA. Here we describe how Marcus' parser can be simulated by a generalized form of the PDA.

(i) Non-atomic actions: The behaviour of the parser with non-atomic actions can be described in terms of M' ∈ M*, a sequence of compositions of M, which in turn can be specified by a sequence δ' in δ*.

(ii) Accessibility of descendants of current active node and current cyclic node: What parts of the partial parse tree are accessible in Marcus' parser seems to be a moot point. Marcus [4] states "the parser can modify or directly examine exactly two nodes in the active node stack ... the current active node and S or NP node closest to the bottom of the stack ... called the dominating cyclic node ... or ... current cyclic node ... The parser is also free to examine the descendants of these two nodes ... although the parser cannot modify them. It does this by specifying the exact path to the descendant it wishes to examine." The problem is whether by descendants of these two nodes one means the immediate daughters, or descendants at arbitrary levels.

It seems plausible that accessibility of immediate descendants is sufficient. To explore this idea, we need to examine the reason behind partial tree accesses in Marcus' parser. It could be argued that tree accessibility serves two purposes: (1) Examining what daughters are attached to the current active node considerably reduces the number of packet rules one needs to write. (2) Examining the current cyclic node and its daughters serves the purpose of binding traces. Since transformations are applied in each transformational cycle to a single cyclic node, it seems unnecessary to examine descendants of a cyclic node at arbitrarily lower levels. If Marcus' parser indeed accesses only the immediate daughters (a brief examination of the sample grammar [4] does not seem to contradict this), then the accessible part of a parse tree can be represented by a pair of nodes and their daughters. Moreover, the set of such pairs of height-one trees is finite in a grammar. Furthermore, if we extend the access to the descendants of these two nodes down to a finite fixed depth (which, in fact, seems to have supporting evidence from X-bar theory and C-command), we will still be able to represent the accessible parts of parse trees with a finite set of finite sequences of fixed-height trees.

A second interpretation of Marcus' statement is that descendants of the current cyclic node and current active node at arbitrarily lower levels are accessible to the parser. However, in the presence of non-cyclic recursive constructs, the notion of giving an exact path to a descendant of the current active or current cyclic node would be inconceivable; in fact one can argue that in such a situation parsing cannot be achieved through a finite number of rule packets. The reader is reminded here that PIDGIN (unlike most programming languages) does not have iterative or recursive constructs to test the conditions that are needed under the latter interpretation. Thus, a meaningful assumption in the second case is to consider every recursive node to be cyclic, and to limit accessibility to the subtree dominated by the current cyclic node in which branches are pruned at the lower cyclic nodes. In general, we may also include cyclic nodes at fixed recursion depths, but again branches of a cyclic node beyond that must be pruned. In this manner, we end up with a finite number of finite sequences (hereafter called forests) of finite trees representing the accessible segments of partial parse trees.

Our conclusion is that at each stage of parsing the accessible segment of a parse tree, regardless of how we interpret Marcus' statement, can be represented by a forest of trees that belong to a finite set T(N,h). T(N,h) denotes the set of all trees with non-terminal roots and of a maximum height h. In the general case, this information is in the form of a forest, rather than a pair of trees, because we also need to account for the unattached subtrees that reside in the buffer and may become an accessible part of an active node in the future. Obviously, these subtrees will be pruned to a maximum height h−1. Hence, the operation of the parser can be characterized by the partial function M from active packets, subtrees rooted at current active and cyclic nodes, and contents of the window into compound actions, i.e.,

    M: 2^P × (T(N,h) ∪ {λ}) × (T(C,h) ∪ {λ}) × (T(N,h−1) ∪ Σ)^(k) → ACTIONS

where T(C,h) is the subset of T(N,h) consisting of the trees with cyclic roots.

In the PDA simulating the general parser, the set of stack symbols Γ would be the set of triples [TY,TX,P], where TY and TX are the subtrees rooted at current cyclic node Y and current active node X, and P is the set of packets associated with X. The states of this PDA will be of the form <X,P1,P2,buffer>. The last three elements are the same as before, except that the buffer may now contain subtrees belonging to T(N,h−1). (Note that in the simple case, when h=1, T(N,0) = N.) The first entry is usually λ, except that when the current active node X is dropped, this element is changed to T'X. The subtree T'X is the tree dominated by X, i.e., TX, pruned to the height h−1. Definition of the move function for this PDA is very similar to the simplified case.
For example, under the assumption that the pair of height-one trees rooted at current cyclic node and current active node is accessible to the parser, the definition of the δ function would include the following statement among others:

If M(P,TX,TY,W1...Wn) = "drop; deactivate P1; activate P2" (where TX and TY represent the height-one trees rooted at the current active and cyclic nodes X and Y), then

    δ(<λ,∅,∅,α|W1...Wnβ>, λ, [TY,TX,P]) = (<X,P2,P1,α|W1...Wnβ>, λ)

for all α and β. Furthermore,

    δ(<X,P2,P1,α|W1...Wnβ>, λ, [TY,TZ,P']) = (<λ,∅,∅,α|W1...Wnβ>, [TY,TZ,(P' ∪ P2)−P1])

for all (TZ,P') in T(N,1) × 2^P such that TZ has X as its rightmost leaf.

In the more general case (i.e., when h > 1), as we noted above, the first entry in the representation of the state will be T'X, rather than its root node X. In that case, we will replace the rightmost leaf node of TZ, i.e., the nonterminal X, with the subtree T'X. This mechanism of using the first entry in the representation of a state allows us to relate attachments. Also, in the simple case (h=1) the mechanism could be used to convey feature information to the higher level when the current active node is dropped. More specifically, there would be a bundle of features associated with each symbol. When the node X is dropped, its associated features would be copied to the X symbol appearing in the state of the PDA (via the first δ-move). The second δ-move allows us to copy the features from the X symbol in the state to the X node dominated by the node Z.

(iii) Accommodation of features: The features used in Marcus' parser are syntactic in nature and have finite domains. Therefore the set of attributed symbols in that parser constitutes a finite set. Hence syntactic features can be accommodated in the construction of the PDA by allowing complex non-terminal symbols, i.e., attributed symbols instead of simple ones. Feature assignments can be simulated by replacing the top stack symbol in the PDA. For example, under our previous assumption that two height-one trees rooted at current active node and current cyclic node are accessible to the parser, the definition of the δ function will include the following statement:

If M(P,TX:A,TY:B,W1...Wn) = "assign features A' to current active node; assign features B' to current cyclic node; deactivate P1; activate P2" (where A, A', B and B' are sets of features), then

    δ(<λ,∅,∅,α|W1...Wnβ>, λ, [TY:B,TX:A,P]) = (<λ,∅,∅,α|W1...Wnβ>, [TY:B ∪ B',TX:A ∪ A',(P ∪ P2)−P1])

for all α and β.
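To make restriction (iii) concrete, here is a minimal sketch (ours, with invented feature names) of feature assignment simulated as replacement of the top stack symbol; since the feature domains are finite, the enlarged symbol alphabet stays finite.

    # Hedged sketch: simulating feature assignment by replacing the top
    # stack symbol.  Attributed symbols are (tree, frozenset-of-features)
    # pairs; "tense" and "agr3sg" are hypothetical feature names.
    def assign_features(stack, new_active_feats, new_cyclic_feats):
        (t_cyclic, b_feats), (t_active, a_feats), packets = stack[-1]
        stack[-1] = ((t_cyclic, b_feats | new_cyclic_feats),
                     (t_active, a_feats | new_active_feats),
                     packets)

    stack = [(("S", frozenset()), ("VP", frozenset()), frozenset({"Parse-VP"}))]
    assign_features(stack, frozenset({"tense"}), frozenset({"agr3sg"}))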
Now, by lifting all three restrictions introduced on the simplified parser, it is possible to conclude that Marcus' parser can be simulated by a pushdown automaton, and thus accepts a context-free set of strings. Moreover, as one of the reviewers has suggested to us, we could make our result more general if we incorporated a finite number of semantic tests (via a finite oracle set) into the parser. We could still simulate the parser by a PDA. Furthermore, the pushdown automaton which we have constructed here is a deterministic one. Thus, it confirms the determinism of the language which is parsed by Marcus' mechanism. We should also point out that our notion of a context-free language being deterministic differs from the deterministic behaviour of the parser as described by Marcus. However, since every deterministic language can be parsed by a deterministic parser, our result adds more evidence to believe that Marcus' parser does not hide non-determinism in any form.

It is easy to obtain (through a standard procedure) an LR(1) grammar describing the language accepted by the generalized PDA. Although this grammar will be equivalent to Marcus' PIDGIN grammar (minus any semantic considerations), and it will be a right cover for any underlying surface grammar which may be assumed in constructing the Marcus parser, it will suffer from being an unnatural description of the language. Not only may the resulting structures be hardly usable by any reasonable semantic/pragmatics component, but also parsing would be inefficient because of the huge number of non-terminals and productions. In automatic generation of Marcus-style parsers, one can assume either a context-free or a context-sensitive grammar (as a base grammar) which one feels is naturally suitable for describing surface structures. However, if one chooses a context-sensitive grammar then one needs to make sure that it only generates a context-free language (which is unsolvable in general). In [5] and [6], we have proposed a context-free base grammar which is augmented with syntactic features (e.g., person, tense, etc.) much like attributed grammars in compiler writing systems. An additional advantage of this scheme is that semantic features can also be added to the nodes without extra effort. In this way one is also able to capture the context-sensitivity of a language.

6. Conclusions

We have shown that the information examined or modified during Marcus parsing (i.e., segments of partial parse trees, contents of the buffer and active packets) for a PIDGIN grammar is a finite set. By encoding this information in the stack symbols and the states of a deterministic pushdown automaton, we have shown that the resulting PDA is equivalent to the Marcus parser. In this way we have proved that the set of surface sentences accepted by this parser is a context-free set. An important factor in this simulation has been the assumption that the buffer in a Marcus-style parser is bounded. It is unlikely that all parsers with unbounded buffers written in this style can be simulated by deterministic pushdown automata. Parsers with unbounded buffers (i.e., two-stack parsers) are used either for recognition of context-sensitive languages, or, if they parse context-free languages, possibly to hide the non-determinism of a language by storing an unlimited number of lookaheads in the buffer. However, this does not mean that some Marcus-type parsers that use an unbounded buffer in a constrained way are not equivalent to pushdown automata. Shipman and Marcus [7] consider a model of Marcus' parser in which the active node stack and buffer are combined to give a single data structure that holds both complete and incomplete subtrees. The original stack nodes and their lookaheads alternately reside on this structure. Letting an unlimited number of completed constructs and bare terminals reside on the new structure is equivalent to having an unbounded buffer in the original model. Given the restriction that attachments and drops are always limited to the k+1 rightmost nodes of this data structure, it is possible to show that a parser in this model with an unbounded buffer still can be simulated with an ordinary pushdown automaton. (The equivalent condition in the original model is to restrict the window to the k rightmost elements of the buffer. However, simulation of the single-structure parser is much more straightforward.)

ACKNOWLEDGEMENTS
The author is indebted to Dr. Len Schubert for posing the question and carefully reviewing an early draft of this paper, and to the referees for their helpful comments. The research reported here was supported by the Natural Sciences and Engineering Research Council of Canada operating grants A8818 and 69203 at the universities of Alberta and Simon Fraser.

REFERENCES

[1] R.C. Berwick. The Acquisition of Syntactic Knowledge. MIT Press, 1985.
[2] E. Charniak. A parser with something for everyone. In Parsing Natural Language, ed. M. King, pp. 117-149. Academic Press, London, 1983.
[3] K. Culik II and R. Cohen. LR-regular grammars: an extension of LR(k) grammars. Journal of Computer and System Sciences, vol. 7, pp. 66-96, 1973.
[4] M.P. Marcus. A Theory of Syntactic Recognition for Natural Language. MIT Press, Cambridge, MA, 1980.
[5] R. Nozohoor-Farshi. LRRL(k) grammars: a left to right parsing technique with reduced lookaheads. Ph.D. thesis, Dept. of Computing Science, University of Alberta, 1986.
[6] R. Nozohoor-Farshi. On formalizations of Marcus' parser. COLING-86, 1986.
[7] D.W. Shipman and M.P. Marcus. Towards minimal data structures for deterministic parsing. IJCAI-79, 1979.
[8] T.G. Szymanski and J.H. Williams. Noncanonical extensions of bottom-up parsing techniques. SIAM Journal of Computing, vol. 5, no. 2, pp. 231-250, June 1976.
[9] D.A. Walters. Deterministic context-sensitive languages. Information and Control, vol. 17, pp. 14-61, 1970.
SEMANTIC STRUCTURE ANALYSIS OF JAPANESE NOUN PHRASES WITH ADNOMINAL PARTICLES

Akira SHIMAZU, Shozo NAITO, and Hirosato NOMURA
Basic Research Laboratories, N.T.T.
3-9-11, Midori-cho, Musashino-shi, Tokyo 180, Japan

Abstract

Japanese has many noun phrase patterns of the type A no B consisting of two nouns A and B with an adnominal particle no. As the semantic relations between the two nouns in the noun phrase are not made explicit, the interpretation of the phrases depends mainly on the semantic characteristics of the nouns. This paper describes the semantic diversity of A no B and a method of semantic analysis for such phrases based on feature unification.

1. Introduction

Japanese has many noun phrase patterns of the type A no B. The noun phrase pattern, which consists of two nouns A and B with an adnominal particle no, and which has at least the same ambiguity as B of A (and some additional ambiguities not found with the equivalent English construction), does not express any explicit semantic relations between the two nouns. Consequently, its interpretation depends mainly on the semantic characteristics of the nouns. Furthermore, phrase patterns N1 no N2 no ... no Nn often appear. Because the number of possible dependency structures between the constituents is 2^(n−1) (2n−3)!! / n!, semantic analysis of such phrases is necessary to resolve the ambiguities. To date, there have been no adequate analyses for this linguistic phenomenon, nor have there been any clear methodological proposals for its semantic analysis.

This paper describes a) the semantic diversity of A no B, b) the analysis of the semantic structure for A no B by a unification-based method of semantic function application, c) typical semantic structures of A no B, d) the possibility of paraphrasing A no B as a noun phrase with a relative clause by the addition of a verb, and e) the resolution of ambiguities using contextual information from the viewpoint of the relation between A no B and its corresponding relative clause.

Although A no B is a simple form, it is interesting in two respects. First, A no B represents a general linguistic problem for semantic processing. The reason is that, in some cases, A or B is a noun form derived from a verb or adjective, thus necessitating the semantic processing of verbs and adjectives. Second, A no B can be paraphrased as a noun phrase with a relative clause, in just the same way as some English complex nominals [3, 5]. Putting it another way, as information is condensed into a simple expression, there are ambiguities as to the semantic relations between the two nouns. Consequently, contextual analysis plays a crucial part in the resolution of the ambiguities.
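As an aside (ours, not the paper's), the dependency count given above is the Catalan number C(n−1); a few lines of Python confirm the identity 2^(n−1)(2n−3)!!/n! = C(n−1) for small n:

    # Check that 2^(n-1) * (2n-3)!! / n! equals the Catalan number C(n-1),
    # the number of binary dependency structures for N1 no N2 no ... no Nn.
    from math import comb, factorial

    def double_factorial(m):
        out = 1
        while m > 1:
            out *= m
            m -= 2
        return out

    for n in range(2, 8):
        closed_form = 2 ** (n - 1) * double_factorial(2 * n - 3) // factorial(n)
        catalan = comb(2 * (n - 1), n - 1) // n
        assert closed_form == catalan
        print(n, closed_form)   # n=3 gives 2, n=4 gives 5, ...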
2. Semantic Diversity of A no B

A no B is frequently found in Japanese sentences. An examination of scientific and newspaper articles showed that the occurrence of A no B accounts for about half the total number of noun phrases in a text [11]. The other occurrences are noun phrases with relative clauses, and coordinated noun phrases. In constructions of the type A no B, A or B can represent either a simple noun, as in Taroo no ie ("Taro's house"), an NP of the same A no B pattern, as in kariforunia no shuto no jinko ("the population of the capital of California"), or an NP with a relative clause, as in Watashi ga atta hito no na ("the name of the person who I met"). There is also a fourth pattern involving an additional particle such as kara, made, de and so on, as in Tookyoo kara no densha ("the train from Tokyo").

This paper deals mainly with constructions of the first type, though the method presented here is also applicable recursively to patterns of the second and third types: this is possible because in such constructions, the semantic features of A (i.e., X no Y, or S Y) derive from its head (Y). In the fourth type, analysis is slightly less straightforward, because the particle does provide some additional useful information.

A no modifies a head B to restrict or clarify the reference [1, 2] of B. In the example Sutanfoodo daigaku no kyooju ("professor at Stanford University"), Sutanfoodo daigaku ("Stanford University") restricts and clarifies the range of reference for kyooju ("professor"). Such A no B constructions can be classified semantically into five main groups according to the characteristics of A and B, as shown in Table 1. The five main groups can be further classified into a total of about 80 semantic relations. In the study mentioned above [11], the authors examined about ten thousand examples of A no B occurrences, and checked the semantic relations. The appendix shows the semantic relations together with examples.

Table 1  Five main groups by the semantic classification of A no B

1. B functions as a predicate semantically, and A is its argument.
   kare no ren'ai ("his love"); B: ren'ai ("love") ... action, A: kare ("he") ... agent of the action
2. B functions as a case role such as location, and is restricted relatively by A.
   gakkoo no mae ("front of a school"); B: mae ("front"/"before") ... location/time, A: gakkoo ("school") ... object
3. B is an attribute of A.
   hako no omosa ("weight of a box"); B: omosa ("weight") ... attribute, A: hako ("box") ... object
4. B is an argument of a predicate functioned semantically by A.
   sanpo no hito ("man who strolls"); B: hito ("man") ... agent, A: sanpo ("stroll") ... action
5. A is a kind of an attribute value of B.
   kooen no ki ("tree in a park"); B: ki ("tree") ... object, A: kooen ("park") ... value of an object's attribute location

It is necessary to analyze these semantic relations in such detail in order to produce good quality machine translation from Japanese into English, among other tasks. To date, linguistic processing has not entailed such a detailed classification.

The semantic structure of A no B is generally a function of the meanings of A and B, but the processing is not just a simple computation based on the semantic contents of A and B. For instance, when B functions as a predicate semantically, there is a case relation between A and B. However, there are no syntactic clues such as a case particle, unlike in full sentences. Hence, it is necessary to consider the semantic characteristics of A and B in order to analyze the semantic structure. Processing of context [12] is generally necessary to determine the correct semantic structure of A no B uniquely, as A no B is often ambiguous if considered out of context. For instance, in the case of Furansujin no hanashi ("speech of a Frenchman"), there are two possible semantic relations for Furansujin ("Frenchman"): i.e., as agent or content of hanashi ("speech").
3. Semantic Structure Analysis of A no B

3.1 Analysis by Function Application

The semantic structure of A no B is generally analyzed from A and B by "semantic function application", which is similar to the idea of function application in the CUG framework (categorial unification grammar) [4, 13], viewing either A or B as a functor, and the other as its argument.

    (functor left/right) = (argument)
    (functor result) = (semantic-structure)

From a different viewpoint, this is a generalization of the method of case frame analysis in which the analysis of the semantic structure of a verb-plus-noun phrase is based on the case-frame of the verb. That is, when a verb as a functor is applied to a noun phrase as its argument, if the noun phrase and a slot of the case-frame unify, the semantic structure is obtained as a result of assigning the relevant information from the noun phrase to the slot. So, the analysis is a kind of semantic treatment using the unification-based method. In this view, the case frames correspond to subcategorization frames, and the analysis corresponds to unifications applied to a subcategorization frame [8, 9].

Characteristics of the function-based analysis are mainly to express input-output relations clearly, and to put stress on a lexicon-based method. As the meaning of A no B depends on the individual A and B, it follows that each lexical entry must have information regarding its "functionality". This is also the method adopted in CUG. Furthermore, these functors, arguments, and resulting semantic structures are represented as sets of attribute-value pairs, again as in CUG. This is also similar to frame representations found in AI.

The sets of attribute-value pairs associated with a functor noun and an argument noun are generally represented as in Figure 1, and will be called "semantic structures". The characteristics of these structures are described in Section 3.3. In the representation, the attributes left and right indicate an argument for a functor word and a position (direction), and the values represent conditions imposed on the argument. Syncat, semcat and sense indicate syntactic features, semantic features and head word meaning respectively. Marker indicates the case particle found as a postposition with the noun phrase. Pred gives semantic conditions which restrict and clarify the relation between A and B. Result shows the set of attribute-value pairs obtained by the semantic function application. In the representation, words in parentheses such as (syncat) and (right pred) are path notations and are used to point to a value in the manner of an index notation [9].

    syncat:  <syntactic-features>
    semcat:  <semantic-features>
    sense:   <word-sense>
    marker:  <case-particle>
    left:    NONE
    right:   syncat: <syntactic-features>
             semcat: <semantic-features>
             sense:  []
             pred:   <case-name>: syncat: (syncat)
                                  semcat: (semcat)
                                  sense:  (sense)
                                  case:   <syntactic-case-name>
                                  marker: (marker)
    result:  syncat: np
             semcat: (right semcat)
             sense:  (right sense)
             marker: []
             pred:   (right pred)

    Figure 1a  Format for a functor noun having an argument at its right

    syncat:  <syntactic-features>
    semcat:  <semantic-features>
    sense:   <word-sense>
    marker:  <case-particle>
    left:    NONE
    right:   NONE
    pred:    rel:  <predicate-name>
             arg1: syncat: <syntactic-features>
                   semcat: <semantic-features>
                   sense:  []
                   default-marker: <default-case-particle>
                   marker: <case-particles>
             ...
             argn: syncat: <syntactic-features>
                   semcat: <semantic-features>
                   sense:  []
                   default-marker: <default-case-particle>
                   marker: <case-particles>

    Figure 1b  Format for an argument noun
Which of A no or B has a function role depends on syntactic and semantic characteristic as described in section 3.3. Then A no is regarded as being constructed from A and no. Accordingly, the semantic structure of A no B is analyzed as follows: First, the functor no gets argu- ment A, and makes a noun phrase A no with the semantic characteristics inherited from A. Secondly, the functor A no or B gets an argument B or A no respectively and makes a noun phrase A no B with the semantic characteristics inherited from B. The analy- sis process is shown as follows. (1) functor: no, argument: A, result: Ano (2) functor: Ano, argument: B, result: AnoB, or functor: B, argument: Ano, result: AnoB In the case of A p no B (where p is an additional par- ticle), A and p are combined first. The semantic struc- ture of A p is almost the same as that of A no except for the additional information derived from the marker p. After this, the final semantic structure is composed in the same way as for A no B. This paper focuses mainly on the analysis process after constituents of A no B have been found, and does not pay specific attention to the method of how constituents are found, for which purpose the active chart parsing method is used. With regard to the composition of A no, we take the choice giving no the functor role from the viewpoint of generality, although it is possible to view A as having this role. No has a functor role that shifts character- istics and functions of A to the semantic structure of A no, and adds a marker feature to the semantic structure of A no. The representation of no is shown in Figure 2. In the analysis of A no B, the semantic characteris- tics and functions of A and B weigh heavily, because although there is an adnominal case particle no, it is semantically rather neutral compared with other case particles. To put it another way, case particles usually function as explicit indicators of the preferred semantic interpretation. This fact suggests the significance of studying the method of analysis of A no B. When A no has a functor role, the functor must get B as its argument and extract a semantic relation between A and B. For example, in guruupu no shuukai ("meeting of a group"), guruupu no modifies an action nominal and makes a result semantic structure indicating the semantic relation (agent) as in Figure 3. In the representation >pred indicates a constraint that an argument must have a pred feature. The main semantic category of A no B is generally taken from the head B of A no" B. However, in some cases the semantics of B are different from those of A no B, and it is necessary to change the semantic cate- syncat: p sense: no(c), no) left: syncat: {n np} semcat: [] sense: [] marker: no left: NONE right: [] result: [] right: NONE result: syncat: np semcat: (left semcat) sense: (left sense) marker: no left: NONE right: (left right) result: (left result) Figure 2 syncat: semcat: sense: left: right: result: Figure 3a Semantic structure of a particle no n animate guruupu ( ~" ~t~ - -f , group) NONE syncat: semcat: sense: > pred: {np n} [] [] []: syncat: semcat: sense: np (right semcat) (right sense) (right pred) syncat: semcat: sense: pred: np animate (sense) Semantic structure of gruupu ("group"] 125 syncat~ semcat: sense: marker. 
left right: result: Figure 3b np loc • gruupu(~'%,- "/, group) no NONE syncat: ~mcat: senso-" >pred: syncat~ Semcat: Sense: lz~l: ~p~ {action thing} [] []: syncat: semca~ sen6e: default-marker: marker, no np (fight semeat) (right sense) (right pred) Semantic structure of gru~pu no np foe (sense) de syncat: Semcat. Sense: marker. [] left: NONE right~ NONE pred: reh agent: Figure 3e n action shuuAai (~ ~, meeting) held-meeting syncat" {np n} semcar animate sense: [] case: stlbj dei'ault-marker: ga marker:. {ga no *} Semantic structure of shuukai ('meeting") syneat: Semcat~ sense: pred: np action shsuAa/(~ =, ~ meeting) reh held-meeting agent: syncat: np semcat: animate sense: &uruupu ( ~" ;t, -- "t , group) case: suhj default-marker: ga marker:, no Figure 3d Semantic structure of gruupu no shuuAai ('meeting of a group') gories. For example, heita," ("soldier") is animate, but oraocka no heitai ("toy soldier") is not. Therefore omocAa no has the function of changing the semantic category of the head which it modifies. Such a function is obtained by a kind of overwriting unification 19! 3.3 Semantic Structures in Five Main Groups The characteristics of the semantic structures in the f~ve ma/n groups are as follows. [Case 1] In this case, B, which is the nominal form of a predicate (a verb or an adjective), functions as an ar~ument~ and A, which is a semantic case argument of B, functions as a functor. Notice that when B functions semantically as a predicate, there are two alternatives for the assignment of the functor role. The first is that the predicate word functions as the functor. The second is the reverse L41. This paper adopts the latter way mainly because of the characteristic of free word order in a Japanese sentence. The semantic structure of A and A rw is almost the same except for a marker feature, and has the following functor role: when A no is an obligatory case (argu- ment) of the predicate B, A no unifies with the argument feature of" B. When A no is an optional case (adjunct), the semantic structure of A no is added to that of B as an optional case by unification. The functor role is added to A by a kind of lexical rule. Ez~mples are shown in Figures 3 and 4. [Case 2 and Case 3] In these cases, B represents a kind of case role or attribute respectively, which functions as a predicate. So, functionality is given to A in the same way as described above. Examples are shown in Figures 5 and 6. [Case 4] The reverse case of Case 1, that is, A is the nominal Form of" a predicate, and B is the semantic case element of the predicate. So B is a functor and A no is its argument in the reverse way. The example is shown in Figure 7. Kooen ("park") in the example gets an argument in the opposite direction to that of example 4. The phrase in this case corresponds to a noun phrase with a relative clause. So, a feature embedded is used in the representation, that is, it means that the pred feature is introduced from the complement. [Case 5] Semantic relations in this case are classified mainly into three types : a) relational restriction such as a human relation, b) attributive restrict/on such as a kind relation and c) situational restriction such as a location relation. (a) relational restr/ction: This case includes the rela- tionships between humans, organizations, and whole- part relations. Generally a predicate role is given to B and a functor role is given to A in the same way as Case 1. An example is shown in Figure 8. 
In the example, sensei ("teacher") has a pred feature and is an argu- ment of the functor watasA~ ('I"). (b) attributive restriction: A has attributive character- istics such as quantity, kind, degree, and property, and B is generally a thing. As A functions as a kind of pred- " icate, a predicate feature is assigned to A. An example is shown in Figure 9 with kooshifima r~o n,,no Ccheck- ered-pattern cloth"), where kooshijurna has a pred fea- ture and is an argument of the functor ~,,no ("cloth"). (c) situational restriction: A has situational meanings such as location, time, source, destination, purpose, and method, and restricts B by the situation. Like the relational restriction case, B is assigned a predicate feature, and A a functor role as shown in Figure 10. In the example, doozoo ("oronze statue") has a pred fea- ture and is an argument of the functor kooen ("park"). 126 Akira SHIMAZU syncat: semcat: sense: marker: left~ right: result: Figure 4a syncat: semcat: sense: pred: Figure 4b n loc kooen (~ [~, park) [] NONE syncat: {n rip v vp} semcat: [] sense: [] right: [] > pred: Io¢: syncat: np semcat: Io¢ sense: (sense) default-marker. marker:. (marker) syncat: np semcat: (right semcat) sense: (right sense) pred: (right pred) de Semantic structure of kooen ("park') np action shuukai (~1~ ~, meeting) tel: held-meeting agent: syncat: np semcat~ animate sense: [] case: subj default-marker, ga marker:. {ga no *} loc: syncat: np semcat: 1o¢ sense: kooen (~Y. ~, park) default-marker, de marker: no Semantic structure of kooen no shuukai ("meeting in a park") syncat: semcat: sense: pred: np loc mae ('~, front) rel: be object: syncat: semcat: sense: case: default-marker: marker: no np loc biru ( ~ Jt~, building) subj ga Figure 5 Semantic structure of biru no mae ("front of a building") 3.4 Organization of Lexical Information To assign an appropriate semantic structure to a noun, the following characteristics must be considered: a) A or B which works as a predicate in some cases works as a modifier (argument or adjunct) of a predi- cate in the other cases, as with kenkyuu ("research", "study") in the example gengo no kenkyuu ("study of language") and kenkyuu no kaishi ("start of the research"). Therefore, A or B generally has both roles of a predicate and a modifier. b) When there are several no's in a noun phase such as syncat: semcat: sense: pred: Figure 6 np attribute takasa ( ~ ~, height) rel: have object: syncat: np semcat: animate sense: yama (ILl, mountain) case: subj default-marker: ga marker: no attribute: syncat: " (np) semcat: (semcat) sense: (sense) case: obj default-marker: o marker. 
* Semantic structure ofyama no takasa ("height of a mountain") syncat: semcat: Sense: marker: embedded: Figure 7 np loc kooen ( ~ [], park) [] pred: rel held-meeting agent: syncat: {n np} semcat: animate sense: [] case: subj default-marker: ga marker: {ga no .} loc: syncat: np semcat: loc sense: (sense) default-marker: ga marker: • Semantic structure of shuukai no kouen ("park where people meet") syncat: np semcat: animate sense: sen,sei (~: ~__., marker: [] pred: rel: agent: teacher) teach syncat: (syncat) semcat: (semcatJ sense: (sense) case: subj default-marker: ga marker: * recipient: syncat: np semcat: animate sense: watash~ (~L, I) case: dative default-marker: ni marker: no object: syncat: {n np} semcat: [] sense: [] case; obj default-marker: o marker: no Figure 8 Semantic structure of watashi no sertsei ("my teacher") A no B no C, there are several possibilit/es as to the word dependency structure. There are two principal 127 s),ncat: semcat: selIse" marker. embedded: Figure 9 tl state n~nc (~,., cloth) [] pred: rel: object: checkered-pattern syucat: np semcat: thing sense: (sense) default-marker: ga marker. * Semantic structure of ~olhijima no nuno ('¢.heckered-pattarn cloth ~) syncat: semcat: sense: marker: pred: Figure 10 np thing doozoo (~ ~, bronze statue) [l tel: be object: syneat: np semcat: th/ng sense: (sense) case: subj default*marker: ga marker.. * loc: syncat: np semcat: loc sense: kooen ('~[~, park) case: dative default-marker: ni marker:, no Semantic structure of kooen no doozoo ('bronze statue in a park') possibilities: ((Ano B) no C) as in, for example, jiyuu no raegami no shashin ("photograph of the Statue of Liberty"), and (Ano (Brm C)) as Kariforunia.san no jooshitsu no kome ("rice of fine qaulaity from California"). Thus, the middle noun (B) may relate to the words on either side (A and C), or to only the right- hand word (C). In the ~rst case, the middle noun may be an argument of the predicate on both sides. In the latter case, the right,hOSt word C may be an argtunent of each predicate to the left, the number of which is not in general restricted. c) There are two cases of (A no (B no C)). When C is a nominal predicate, A and B might be separate arg~nents as in Kinoo no Taroo no Sanpo ("raro's walk of yesterday"). When C is an ordinary noun, however, the analysis is further complicated by the fact that implicit predicates such as location, possession, attribution etc., are involved, For example, in Tookyoo no NTT no biru ('~rrr's building in Tokyo"), the inner predicate structure for NTT no bits ("NTT has a building") is attached to the appropriate argument of the outer predicate Tookyoo no biru Cbuilding is in Tokyo"). From the characteristics described above and the method for assigning a functor role to an axg~nent of a predicate, we adopt the method that a funcmr role is added to a constituent by a kind of lexical rule before function application. In general, several candidate constituents are made by ~he feature structure pre- formation. For example, at the stage ofAnoB - Ano B, when B is a functor and has a meaning such as location, time and so on, two solutions for B are offered as candidates: one as an argument of Ano, which works as a predicate, and the other as an adjunct. 
4. Correspondence between A no B and the Sentence

4.1 Paraphrase of A no B as a Noun Phrase with a Relative Clause

The expression A no B can be paraphrased into A p V B or A' B, adding an appropriate particle p and verb/adjective V, or reforming A to a verbal form A' if appropriate. Both A p V and A' are relative clauses. The paraphrased expression is more informative, and some of the ambiguity is resolved. Paraphrases of A no B in Case 1 - Case 4 are rather easy, as the added verbs/adjectives do not depend so much on context as compared with Case 5. Noun phrases with a relative clause for each case in the A no B classification are shown in Table 2.

Table 2  Noun phrase with a relative clause for each case in the A no B classification

[Case 1] → A p V B; p: ga / o / de / ni (case particles), V: suru ("do") / okonau ("do") / okoru ("happen").
    kare no kekkon ("his marriage") → kare ga suru kekkon ("marriage that he performs")
[Case 2] → A p V B; p: ga / o (case particles), V: aru ("be") / suru ("do") / shita ("done").
    ie no mae ("front of a house") → ie ga aru mae ("front of a place where a house is")
[Case 3] → A ga motsu B ("B which A has").
    ishi no omosa ("weight of a stone") → ishi ga motsu omosa ("weight which a stone has")
[Case 4] → A o suru B ("B who/which does A").
    sanpo no hito ("person who strolls") → sanpo o suru hito ("person who strolls")
[Case 5] → A p V B; p: ni / ga / kara / no tame ni (particles), V: aru ("be in") / motsu ("have") / tsukurareru ("be made") / okosu ("cause").
    kooen no doozoo ("statue in a park") → kooen ni aru doozoo ("statue which is in a park")

Such paraphrases are obtained by a change from a verb-centered to a noun-centered view. A no B is generally related to some event or state in a discourse, and the event or state is represented by an appropriate predicate: pred(A, B). By taking a noun-centered view, the representation is transferred into a representation A [pred(A(*), B)], that is, A in pred(A, B). The expression that gives the corresponding predicate is taken from the value of the pred attribute in the semantic structure.

A noun phrase paraphrased with a relative clause is generally constructed as follows: 1) the head B is put first; 2) a verb is chosen based on the rel attribute, and put to the left of B; 3) a noun phrase corresponding to the appropriate case role, as given by the argument structure of the predicate, is constructed from A and the particle indicated by a default-marker, and put to the left of the verb. For instance, in zoo no omosa ("weight of an elephant"), first, the head omosa is taken; second, the verb motsu ("have") is taken from the value of rel and put to the left of omosa; third, the agent zoo ga ("elephant") is put to the left of the verb. In this way, the desired complex noun phrase zoo ga motsu omosa ("weight that an elephant has") is arrived at.
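A minimal sketch (ours) of this three-step paraphrase construction, using a Figure 6-style structure for zoo no omosa; the role inventory and verb lexicon are simplified assumptions:

    # Hedged sketch of the relative-clause paraphrase: head first, verb from
    # rel, then the A-phrase with its default-marker, assembled right-to-left.
    REL_TO_VERB = {"have": "motsu", "be": "aru"}        # assumed tiny lexicon

    def paraphrase(structure):
        head = structure["sense"]                        # step 1: head B
        verb = REL_TO_VERB[structure["pred"]["rel"]]     # step 2: verb from rel
        role = structure["pred"]["object"]               # step 3: A + particle
        a_phrase = f'{role["sense"]} {role["default-marker"]}'
        return f"{a_phrase} {verb} {head}"               # "A ga V B" order

    zoo_no_omosa = {"sense": "omosa",
                    "pred": {"rel": "have",
                             "object": {"sense": "zoo", "default-marker": "ga"}}}
    print(paraphrase(zoo_no_omosa))   # -> "zoo ga motsu omosa"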
From these, the correct structure can be obtained by unifying an event semantic structure with a predicate feature in A no B as follows. event-semantic-structure-in-context - pred-structure-in-semantic-structure-of-A no B Here, "-" means that the left hand side unifies with the right hand side. Ambiguities of A no B may result from amibiguities regarding the predicates that could be added, ambiguities in the words themselves, or ambiguous case relations. The disambiguation process is illustrated below using an example in which the added predicates are ambiguous. Generally, a verb-centered semantic structure is extracted from a sentence. For the sentence, (sl) Hanako wa kyonen e o k.aita. ('~Hanako painted a picture last year.") the following semantic structure is obtained. This representation is simplified, showing only the information needed for the explanation. pred: [reh paint agent: Hanako object: picture] This semantic structure can be obtained also from the noun-centered semantic structure as follows. picture [pred: reh paint agent: Hanako object: picture(*)] Next, let us assume that the sentence (s2) occurs in the context of (sl). (s2) Hanako no e wa tenrankai de yuushoo shita. ("The picture of Hanako/Hanako's picture won the first prize in an exhibition.") Hanako no e ("the picture of Hanako" or "Hanako's picture") is ambiguous when taken out of context, with a range of possible semantic relations including possession, purchase, producer, and content. However, the ambiguity is resolved by unifying the semantic structure of the previous sentence with each of the semantic structures representing the possible semantic relations: the only semantic structure which can be successfully unified has the producer relation. 5. Remarks This research concerns semantic structures, especially those of noun phrases, and was conducted as part of a series of research efforts in the LUTE (Language Understander, Translator, & Editor) project [e, 7. I0, nl To date, ten thousand examples of A no B have been collected from scientific and newspaper articles, and the appropriateness of the classification of A no B investigated. In addition, as a preliminary experiment, a semantic relation analysis was tried with about a thousand examples, with rather satisfactory results. The meaning of A no B is generally ambiguous, and contextual information is needed to resolve the ambiguities. There seems to be variety of such ambiguities relating to contextual information, but in principle such ambiguities are considered to be resolved by assuming appropriate predicates as described in this paper. Acknowledgment The authors wish to thank Dr. Harold Somers for some helpful suggestions. References [1] Appelt, D. E., "Some Pragmatic Issues in the Planning of Definite and Indefinite Noun Phrases," in Proceedings of the 23rd Annual Meeting of the ACL, 1985. [2] Grosz, B.J., A. K. Joshi, and S. Weinstein, "Providing a Unified Account of Definite Noun Phrases in Discourse," in Proceedings of the 21st Annual Meeting of the ACL, 1983. [3] Isabelle, P., "Another Look at Nominal Compounds," in Proceedings of Coling '84, 1984. [4] Karttunen, L., "Radical Lexicalism," in M. Baltin and A. Kroch (eds.), Alternative Conceptions of Phrase Structure, 1986. [5] Levi, J. N., The Syntax and Semantics of Complex Nominals, Academic Press, 1978. 129 [6] Naito, S., A. Shimazu, and H. Nomura, "Classifi- cation of Modality Function and its AppLication to Japanese Language Analysis," in Proceedings of the 23rd Annual Meeting of the ACL, 1985. 
[7] Nomura, w., S. Naito, Y. Katagiri, and A. Shimazu, "Translation by Understanding: A Machine Translation System LUTE," in Proceed- ings of Coling '86, 1986. [8] Sells, P., Lectures on Contemporary Syntactic Theories: An Introduction to Gomzrnment.Binding Theory, Generalized Phrase Structure Grammar, and LericaI-Functional Grammar, CSLI Lecture Notes Series, No. 3, 1985. [9] Shieber, S. ]YL, ,An Introduction to Unification. Based Approaches to Grammar, CSLI Lecture Notes Series, No. 4, 1986. [10] Shimazu, A., S. Naito, and H. Nomura, "Japanese Language Semantic Analyzer based on an Extended Case Frame Model," in Proceedings of the Eighth International Joint Conference on Artificial Intelligence, 1983. [11] Shimazu, A., S. Naito, and ]=[.Nomura, "B ~t~ ~iR ¢)~- ~&~-]~R • ~,t~, t: (Classifica- tion of Semantic Structures in Japanese Sentences with Special Reference to the Noun Phrase)," ~ ~:~-_.~.-~, ~..~..~~r~~-~47-4 (Informa- tion Processing Society of Japan, Natural Lan- guage Special Interest Group Technical Report No. 47-4), 1985. [12] Sidner, C. L., "Focusing and Discourse," Discourse Processes 6, pp. 107-130, 1983. [18]Uszkoreit, H., "Categorial Unification Gram- mars," in Proceedings of Coling '86, 1986. Appendix Semantic relations between St and 8 in St no 8 [Case1] 1. agent ... ssnmoaka no chyoosa ("study by experts"), 2. objects ... amamori no hoshuu ('repairs of roof leaks"), 3. tangent ... gaikokujin to no fureai ('contact with foreigners'), 4. donor .../~are no purezento ('his present'), 5. receiver ... hata no meiwaku ("inconvenience to others'), 6. method ... den.sha no tsuugaku ('attending school by train'), 7. instrument ... eigo no toi ("the English question"), 8. material ... sa~arm no ~-2oori ('cooking of fish"), 9. reason ... issanteatar#so no yogore ("carbon monoxide contamination"), 10. time ... haru no yakyuu.kenbutsu ('watching baseball in the spring'), 11. location ... kooen no deeto ('date in a park'), 12. source ... kuukoo kara no shuppat~u ('departure from an airport'), 13. destination ... jiyuu • no kikyuu ("desire for freedom"), 14. goal ... iruka no hogo no tame no seitai-choosa Cecological research to protect dolphins'), 15. situation ... warui teahi no ryokoo ("trip in bad weather'), 16. content ... kakkai seijooha no har~shiai ("talks for Diet normalization"), 17. role ... hahn toshite no hataraki {"role as a mother"), 18. manner ... guu.zen no itchi ("simple coincidence'), 19. frequency ... nijukkai no chuusha ('20 injections"), 20. ratio ... san wari no dageki ("batting at .300"), 21. degree ... ooguchi no kenkin (*large contributions"), 22. number ... 9,700 man'en ao kikin ("¥97million in contributions"). [Case2] 1. location ... yama no ue ("above the mountain"), 2. time :.. shokuji no ato ('after lunch"), 3. range ... hookoku no ruzka ('in a report"), 4. direction ... fuae no shinto ('course of the ship"), 5. goal ... kane no tame ("for money"), 6. reason ... r~kki no sei ("due to the heat'), 7. situation ... kinkyuu no baai ('in case of emergency'), 8. manner ... keakoa nojoota/("state of health') ,9. result ... soosenkyo ao kekka ("result of the general elections"), I0. object ... u~tashitaehi no boo [wa...l (" ... on our part"). [Case3] 1, size ... mona no fulcasa ('depth of things'), 2. color ... sh/zen no ira ('natural colors'), 3. temparature ... rmzn~su no atsuaa ('the heat of mid-summer"), 4. form ... ningea no sugata ('human figure'), 5. function ... ~iazokulei no seiaoo ('performance of an artificial leg"), 6. name ... 
mature-/no na ('name of a festival-},7, role ... sooch/no yakuwari ('the role of the device"), 8. age ... son, ha no aem'ei ('age of a player'), 9. number ... yes6/no aedan ('prices of vegetables"), 10. order ... purosgto no shuppauu.jun~ ("Alain Prost's starting position"), 11. ratio ... nihoa no juubua'noichi ('one-tenth the population of Japan'). [Case4] 1. agent ... chooleoku.shuuri no shokuaintachi ('artisans repairing sculptures'}, 2. object ... ka~i no banish/('hypothetical story"),3. method ... kaiket~u no shudan ('way to solve it'), 4. instrument ... seikai.koosaku no bu&i ('weapon for political transactions'), 5. material ... shooset$u no zQiryoo ('data for a novel"), 6. reason ... fiko no gen'in ('cause o£ an accident") ,7. location ,.. chuusha no basho ('parking space'), 8. time ... tsuki.chakuriku ao usa ('morning of the lunar module landing on the moon'), 9. source ... shuppatsu no kuulcoo (=airport of departure'), 10. destination ... h/~n no yaomote ('target of criticism'), II. direction ... hazsha no hookoo ('launching direction"}, 12. goal ... kaitei no nerai ('aim of the revision"), 13. frequency ... shigeki no kaLsuu ('the number of times of stimulation'), 14. manner ... kyoodooseilmtsu no tanoshisa ('enjoyment of community living'), 15. degree ... un'ei ao muzu/eazhisa ("dimculties of the operation'), 16. ratio ... daigaku- sotsu no wax/ai ('the percentage of college graduates'}, 17. number ... shi~hutsu no gaku ("the sum of the expenses'). [CaseS] 1. possesion ... taroo no hon ('Taro's book'), 2. belong-to ... ~tanfoodo-daigaku no ttyooju ('professor at Stanford University"), 3. human-relation ... seito no chichioya ('father of a student'), 4. whole-part ... hoteru no he3~ ('a room of a hotel"), 5. part-whole ... futa~u/¢i no hako ('box with a lid'), 6. number ... shichinin no shin.shi ('seven gentlemen'), 7. age ... juunisai no musume san ('12-yearn old girl'}, 8. order ... saigo no hitori ('the last one"), 9. kind ... tennen no shiba ('natural turin), 10. role ... puroyakyuu no seashu ("professional baseball players'), 11. degree ... futsuu no hito ("an average person'), 12. characteristics ... yakoosei no mushi ('nocturnal insects'), 13. material ... eakabiniiru sei no shibafu ('vinyl chloride turf'), 14. reason ... tabako no gai ('effects of smoking'), 15. producer ... GM no jidoosha ("GM car"), 16. loca- tion ... gaikoku no tomodachi ('friends in a foreign country"), 17. time ... rnu/cashi no hitobito ('men of old times'), 18. source .. yuujin kaxa no tegami ("letter from a friend"), 19. destination ... kagaku e no aet~ui ('enthusiasm for sciences"), 20. situation ... aremoyoo no hibi ('days of stormy weather"), 21. goal ... koonyuu no tame no gaika ('foreign exchange needed to purchase ... "), 22. content ... haiku no hon ("a book of haiku"), 23. reference ... sorera no mondai ("problems of this kind", 24. specification ... tokutei no raise ("particular stores"). 130
NOMINALIZATIONS IN PUNDIT

Deborah A. Dahl, Martha S. Palmer, Rebecca J. Passonneau
Paoli Research Center, UNISYS Defense Systems(1)
P.O. Box 517, Paoli, PA 19301 USA

(1) Formerly SDC, a Burroughs Company.

ABSTRACT

This paper describes the treatment of nominalizations in the PUNDIT text processing system. A single semantic definition is used for both nominalizations and the verbs to which they are related, with the same semantic roles, decompositions, and selectional restrictions on the semantic roles. However, because syntactically nominalizations are noun phrases, the processing which produces the semantic representation is different in several respects from that used for clauses. (1) The rules relating the syntactic positions of the constituents to the roles that they can fill are different. (2) The fact that nominalizations are untensed while clauses normally are tensed means that an alternative treatment of time is required for nominalizations. (3) Because none of the arguments of a nominalization is syntactically obligatory, some differences in the control of the filling of roles are required; in particular, roles can be filled as part of reference resolution for the nominalization. The differences in processing are captured by allowing the semantic interpreter to operate in two different modes, one for clauses, and one for nominalizations. Because many nominalizations are noun-noun compounds, this approach also addresses this problem, by suggesting a way of dealing with one relatively tractable subset of noun-noun compounds.

1. Introduction

In this paper we will discuss the analysis of nominalizations in the PUNDIT text processing system.(2) Syntactically, nominalizations are noun phrases, as in examples (1)-(7).

(1) An inspection of lube oil filter revealed metal particles.
(2) Loss of lube oil pressure occurred during operation.
(3) SAC received high usage.
(4) Investigation revealed adequate lube oil.
(5) Request replacement of SAC.
(6) Erosion of impellor blade tip is evident.
(7) Unit has low output air pressure, resulting in slow gas turbine starts.

Semantically, however, nominalizations resemble clauses, with a predicate/argument structure like that of the related verb. Our treatment attempts to capture these resemblances in such a way that very little machinery is needed to analyze nominalizations other than that already in place for other noun phrases and clauses. There are two types of differences between the treatment of nominalizations and that of clauses. There are those based on linguistic differences, related to (1) the mapping between syntactic arguments and semantic roles, which is different in nominalizations and clauses, and (2) tense, which nominalizations lack. There are also differences in control; in particular, control of the filling of semantic roles and control of reference resolution. All of these issues will be discussed in detail below.

(2) The research described in this paper was supported in part by DARPA under contract N00014-85-C-0012, administered by the Office of Naval Research.

2. Clause analysis

The semantic processing to be described in this paper is part of the PUNDIT(3) system for processing natural language messages. The PUNDIT system is a highly modular system, written in Prolog, consisting of distinct syntactic, semantic and discourse components.
[Hirschman1985] and [Hirschman1986] describe the syntactic components of PUNDIT, while [Dahl1986, Palmer1986, Passonneau1986] describe the semantic and pragmatic components. The semantic domain from which these examples are taken is that of reports of failures of the starting air compressors, or sacs, used in starting gas turbines on Navy ships.

The goal of semantic analysis is to produce a representation of the information conveyed by the sentence, both implicit and explicit. This involves 1) mapping the syntactic realization onto an underlying predicate argument representation, e.g., assigning referents of particular syntactic constituents to predicate arguments, and 2) making implicit argument fillers explicit. We are using an algorithm for semantic interpretation based on predicate decomposition that integrates the performance of these tasks. The integration is driven by the goal of filling in the predicate arguments of the decomposition [Palmer1986].

In order to produce a semantic representation of a clause, its verb is first decomposed into a semantic predicate representation appropriate for the domain. The arguments of the predicates constitute the SEMANTIC ROLES of the verb, which are similar to cases.⁴ For example, fail decomposes into become inoperative, with patient as its only semantic role. Semantic roles can be filled either by a syntactic constituent or by reference resolution from default or contextual information. We have categorized the semantic roles into three classes, based on how they are filled. Semantic roles such as theme, actor and patient are syntactically OBLIGATORY, and must be filled by surface constituents. Semantic roles are categorized as semantically ESSENTIAL when they must be filled even if there is no syntactic constituent available.⁵ In this case they can be filled pragmatically, making use of reference resolution, as explained below. The default categorization is NON-ESSENTIAL, which does not require that the role be filled. The algorithm in Figure 1 produces a semantic representation using this information. Each step in the algorithm will be illustrated at least once in the next section using the following (typical) CASREPS text: Sac failed. Pump sheared. Investigation revealed metal contamination in filter.

⁴ In this domain the semantic roles include: agent, instigator, experiencer, instrument, theme, location, actor, patient, source, reference_pt and goal. There are domain specific criteria for selecting a range of semantic roles. The criteria which we have used are described in [Passonneau1988].

⁵ We are in the process of defining criteria for categorizing a role as ESSENTIAL. It is clearly very domain dependent, and relies heavily on what can be assumed from the context.

2.1. A Simple Example

DECOMPOSE VERB - The first example uses the fail decomposition for Sac failed:

    fail <- becomeP(inoperativeP(patient(P))).

It indicates that the entity filling the OBLIGATORY patient role has or will become inoperative.

FOR patient ROLE - PROPOSE SYNTACTIC CONSTITUENT FILLER - A mapping rule indicates that the syntactic subject is a likely filler for any patient role. The mapping rules make use of intuitions about syntactic cues for indicating semantic roles first embodied in the notion of case [Fillmore1968, Palmer1981]. The mapping rules can take advantage of general syntactic cues like "SUBJECT goes to PATIENT" while still indicating particular context sensitivities. (See [Palmer1985] for details.)
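To make this machinery concrete, the following is a minimal Prolog sketch of a decomposition, a mapping rule, and a selection restriction. It is invented for this presentation, not PUNDIT's actual code, and all predicate names are hypothetical; reference resolution is abstracted away by assuming the parse already supplies a referent.

    % Decomposition for fail, with its single OBLIGATORY patient role.
    decomposition(fail, become(inoperative(patient(P))),
                  [role(patient, P, obligatory)]).

    % Mapping rule: the syntactic subject is a likely filler for a patient.
    mapping_rule(patient, subject).

    % Selection restriction: the patient of fail must be a mechanical device.
    selection(fail, patient, Filler) :- mechanical_device(Filler).

    mechanical_device(sac1).
    mechanical_device(pump1).

    % Fill the patient role from a parse that provides a subject referent.
    fill_patient(Verb, parse(subject(Referent)), Decomp) :-
        decomposition(Verb, Decomp, [role(patient, Referent, obligatory)]),
        mapping_rule(patient, subject),
        selection(Verb, patient, Referent).

    % ?- fill_patient(fail, parse(subject(sac1)), D).
    % D = become(inoperative(patient(sac1))).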
CALL REFERENCE RESOLUTION - Sac is the subject of sac failed, and is suggested by the mapping rule as a likely filler of the patient role. At this point the semantic interpreter asks noun phrase analysis to provide a unique referent for the noun phrase subject. Since no sacs have been mentioned previously, a new name is created: sac1.

TEST SELECTION RESTRICTIONS - In addition to the mapping rules that are used to associate syntactic constituents with semantic roles, there are selection restrictions associated with each semantic role. The selection restrictions for fail test whether or not the filler of the patient role is a mechanical device. A sac is a mechanical device, so the subject of the sentence sac failed maps straightforwardly onto the patient role, e.g.,

    becomeP(inoperativeP(patient(sac1))).

Since there are no other roles to be filled, the algorithm terminates successfully at this point and the remaining steps are not applied. The next example illustrates further steps in the algorithm.

2.2. Unfilled Obligatory Roles

The second utterance in the example, Pump sheared, illustrates the effect of an unfilled obligatory role.

DECOMPOSE VERB -

    shear <- causeP(instigator(I), becomeP(shearedP(patient(P))))

Shear is an example of a verb that can be used either transitively or intransitively. In both cases the patient role is filled by a mechanical device that becomes sheared. If the verb is used transitively, the instigator of the shearing, also a mechanical device, is mentioned explicitly, as in The rotating drive shaft sheared the pump. If the verb is used intransitively, as in the current example, the instigator is not made explicit; however, the algorithm begins by attempting to fill it in.

FOR instigator ROLE - Working from left to right in the verb decomposition, the first role to be filled is the instigator role. A mapping rule indicates that the subject of the sentence, pump, is a likely filler for this role. Reference resolution returns pump1 as the referent of the noun phrase. Since pump is a mechanical device, the selection restriction test passes.

FOR patient ROLE - There are no syntactic constituents left, so a syntactic constituent cannot be proposed and tested.

UNFILLED OBLIGATORY ROLES - The patient role, a member of the set of obligatory roles, is still unfilled. This causes failure, and the binding of pump1 to the instigator role is undone. The algorithm starts over again, trying to fill the instigator role.

FOR instigator ROLE - There are no other mapping rules for instigator, and it is non-essential, so Case 4 applies and it is left unfilled.⁶ The algorithm tries again to fill in the patient role.

FOR patient ROLE - Two mapping rules can apply to the patient role, one of which suggests the subject, in this case the pump, as a filler. Reference resolution returns pump1 again, which passes the selection restriction of being a mechanical device. The final representation is:

    causeP(instigator(I), becomeP(shearedP(patient(pump1)))).

The last sentence in the text, Investigation revealed metal contamination in filter, is interesting mainly because of the occurrence of two nominalizations, which are discussed in detail in a separate section.

⁶ In other domains, the instigator might be an ESSENTIAL role and would get filled by pragmatics.
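The failure-driven search of Section 2.2 maps naturally onto Prolog backtracking. The following hypothetical sketch (invented names, not the actual system) lets candidate fillers be consumed from a list of constituents; an obligatory role that stays unfilled causes failure, which undoes the earlier binding.

    decomp(shear, [role(instigator, nonessential), role(patient, obligatory)]).

    mechanical_device(pump1).

    % A role is filled by consuming a constituent, or skipped if nonessential.
    fill([], [], _).
    fill([role(Name, _) | Roles], [Name-C | More], Constituents) :-
        select(C, Constituents, Rest),
        mechanical_device(C),
        fill(Roles, More, Rest).
    fill([role(_, nonessential) | Roles], Filled, Constituents) :-
        fill(Roles, Filled, Constituents).

    % ?- decomp(shear, Roles), fill(Roles, Bindings, [pump1]).
    % The attempt Bindings = [instigator-pump1] is rejected, because the
    % obligatory patient would then stay unfilled; on backtracking,
    % Bindings = [patient-pump1].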
2.3. Temporal Analysis of Tensed Clauses

The temporal component determines what kind of situation a predication denotes and what time it is asserted to hold for [Passonneau1986]. Its input is the semantic decomposition of the verb and its arguments, tense, an indication of whether the verb was in the perfect or progressive, and a list of unanalyzed constituents which may include temporal adverbials. It generates three kinds of output: an assignment of an actual time to the predication, if appropriate; a representation of the type of situation denoted by the predication as either a state, a process or a transition event; and finally, a set of predicates about the ordering of the time of the situation with respect to other times explicitly or implicitly mentioned in the same sentence. For the simple sentence sac failed, the input would consist of the semantic decomposition and a past tense marker:

    Decomposition: become(inoperative(patient(sac1)))
    Verb form: past

The output would be a representation of a transitional event, corresponding to the moment of becoming inoperative, and a resulting state in which the sac is inoperative for some period initiating at the moment of transition.

3. Nominalizations

Nominalizations are processed very similarly to clauses, but with a few crucial differences, both in linguistic information accessed and in the control of the algorithm. The first important linguistic characteristic of the nominalization algorithm is that the same predicate decomposition can be used as is used for the related verb. Secondly, different mapping rules are required, since syntactically a nominalization is a noun phrase. For example, where a likely filler for the patient of fail is the syntactic subject, a likely filler for the patient of failure is an of pp. Thirdly, nominalizations do not make use of the obligatory classification for semantic roles, since noun phrase modifiers are not syntactically obligatory.

In terms of differences in control structure, because nominalizations may themselves be anaphoric, there are two separate role-filling stages in the algorithm instead of just one. The first pass is for filling roles which are explicitly given syntactically; essential roles are left unfilled. If a nominalization is being used anaphorically, some of its roles may have been specified or otherwise filled when the event was first described. The anaphoric reference to the event, the nominalization, would automatically inherit all of these role fillers, as a by-product of reference resolution.⁷ After the first pass, the interpreter looks for a referent, which, if found, will unify with the nominalization representation, sharing variable bindings. This is a method of filling unfilled roles pragmatically that is not currently available to clause analysis.⁸ However, the first pass is important for filling roles with any explicit syntactic arguments of the nominalization before attempting to resolve its reference, since there may be more than one event in the context which the nominalization could be specifying. For example, failure of pump and failure of sac can only be distinguished by the filler of the patient role.

⁷ This suggests the hypothesis that OBLIGATORY roles for clause decompositions automatically become ESSENTIAL roles for nominalization decompositions. This hypothesis seems to hold in the current domain; however, it will have to be tested on other domains. We are indebted to James Allen for this observation.

⁸ Clauses can describe previously mentioned events, as discussed in [Dahl1987]. In order to handle cases like these, something analogous to reference resolution for clauses may be required. However, a treatment of this has not yet been implemented in PUNDIT.
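The role inheritance through unification described above can be sketched directly in Prolog. This is a minimal, invented rendering, not PUNDIT's code: a bare nominalization like failure leaves its patient as a variable, which picks up the stored filler when its representation unifies with a previously described event.

    % Events introduced by earlier clauses in the discourse.
    event(fail1, become(inoperative(patient(sac1)))).
    event(fail2, become(inoperative(patient(pump1)))).

    % Reference resolution: unify the nominalization's (partially filled)
    % representation with a stored event, sharing variable bindings.
    resolve_nominalization(Decomp, EventId) :- event(EventId, Decomp).

    % "failure of sac": the of-pp fixes the patient, selecting fail1 only.
    % ?- resolve_nominalization(become(inoperative(patient(sac1))), E).
    % E = fail1.
    % A bare "failure" inherits a filler as a by-product of unification:
    % ?- resolve_nominalization(become(inoperative(patient(P))), E).
    % E = fail1, P = sac1 ;
    % E = fail2, P = pump1.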
After reference resolution, a second role-filling pass is made, where still unfilled roles may be filled pragmatically with default values in the same way that unfilled verb roles can be filled.

3.1. Temporal Analysis of Nominalizations

As with clauses, the temporal analysis of nominalizations takes place after the semantic analysis. Also as with clauses, one of the inputs to the temporal analysis of nominalizations is the semantic decomposition. The critical difference between the two cases is that a nominalization does not occur with tense. PUNDIT compensates by looking for relevant temporal information in the superordinate constituents in which the nominalization is embedded. Currently, PUNDIT processes nominalizations in three types of contexts.

The first context for which a nominalization is temporally processed is when it occurs as the prepositional object of a temporal connective (e.g., before, during, after) and the matrix clause denotes an actual situation. For example, in the sentence sac lube oil pressure decreased below 60 psig after engagement, the temporal component processes the main clause as referring to an actual event which happened in the past and which resulted in a new situation. When PUNDIT finds the temporal adverbial phrase after engagement, it assumes that the engagement also has actual temporal reference. In such cases, the nominalization is processed using the meaning of the adverb and the tense of the main clause.

The second context in which a nominalization undergoes temporal analysis is where it occurs as the argument to a verb providing temporal information about situations. Such verbs are classified as aspectual. Occur is such a verb, so a sentence like failure occurred would be processed very similarly to a clause with the simple past tense of the related verb, i.e., something failed.

Another type of verb whose nominalization arguments are temporally processed is a verb which itself denotes an actual situation that is semantically distinct from its arguments. For example, the sentence investigation revealed metal contamination in oil filter mentions three situations: the situation denoted by the matrix verb reveal, and the two situations denoted by its arguments, investigation and contamination. If the situation denoted by reveal has actual temporal reference, then its arguments are presumed to as well. These three contexts are sketched schematically below.
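The following hypothetical Prolog rendering of the three contexts is ours, not PUNDIT's; the predicate names and the temporal labels (before_matrix, at_matrix, actual) are invented for illustration.

    aspectual(occur).
    situation_verb(reveal).

    % (a) Object of a temporal connective, with an actual matrix situation:
    %     "... decreased below 60 psig after engagement"
    nom_time(pp(after, Nom), matrix(_V, Tense), time(Nom, Tense, before_matrix)).

    % (b) Argument of an aspectual verb: "failure occurred" ~ "something failed"
    nom_time(arg(Nom), matrix(V, Tense), time(Nom, Tense, at_matrix)) :-
        aspectual(V).

    % (c) Argument of a verb denoting an actual situation of its own:
    %     "investigation revealed ..."
    nom_time(arg(Nom), matrix(V, Tense), time(Nom, Tense, actual)) :-
        situation_verb(V).

    % ?- nom_time(arg(failure1), matrix(occur, past), T).
    % T = time(failure1, past, at_matrix).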
3.2. Nominalization Mapping Rules

We will use the previous example, investigation revealed metal contamination in filter, to illustrate the nominalization analysis algorithm. We will describe the contamination example first, since all of its roles are filled by syntactic constituents. The dotted line divides the algorithm in Figure 2 in the Appendix into the parts that are the same (above the line) and the parts that differ (below the line).

DECOMPOSE VERB - Contaminate decomposes into a NON-ESSENTIAL instrument that contaminates an OBLIGATORY location.

    contaminate <- contaminatedP(instrument(I), location(L))

FOR instrument ROLE - In the example, metal is a noun modifier of contamination, and metal1 is selected as the filler of the instrument role.

FOR location ROLE - The location of a nominalization can be syntactically realized by an of pp or an in pp. The role is filled with filter1, the referent of filter.

At this point the temporal component is called for the nominalization metal contamination in oil filter with two inputs: the decomposition structure and the tense of the matrix verb, in this case the simple past. Because this predicate is stative, the representation of the contamination situation is a state predicate with the decomposition and a period time argument, as well as the unique identifier S (which will eventually be instantiated by reference resolution as contamination1):

    state(S, contaminatedP(instrument(metal1), location(filter1)), period(S))

In this context, the past tense indicates that at least one moment within the period of contamination precedes the time at which the report was filed.

CALL REFERENCE RESOLUTION FOR NOMINALIZATION - There are no previously mentioned contamination events, so a new referent, contamination1, is created. There are no unfilled roles, so the analysis is completed.

3.3. Filling Essential Roles

The analysis of the other nominalization, investigation, illustrates how essential roles are filled. The decomposition of investigate has two semantic roles, a NON-ESSENTIAL agent doing the investigation and an OBLIGATORY theme being investigated.⁹

    investigate <- investigateP(agent(A), theme(T))

There are no syntactic constituents, so the mapping stage is skipped, and reference resolution is called for the nominalization. There are no previously mentioned investigative events in this example,¹⁰ so a new referent, investigation1, is created. At this point, a second pass is made to attempt to fill any unfilled roles.

⁹ In other domains, the theme can be essential, as in "I heard a noise. Let's investigate."

¹⁰ If the example had been A new engineer investigated the pump. The investigation occurred just before the complete breakdown, a previously mentioned event would have been found, and the agent and theme roles would have inherited the fillers engineer1 and pump1 from the reference to the previous event.

FOR agent ROLE - The role is NON-ESSENTIAL, so Case 4 applies, and it is left unfilled.

FOR theme ROLE - The selection restriction on the theme of an investigation is that it must be a damaged component or a damage-causing event. All of the events and entities mentioned so far, the sac and the pump, the failure of the sac and the shearing of the pump, satisfy this criterion. In this case, the item in focus, the shearing of the pump, would be selected [Dahl1986]. The final decomposition is:

    investigateP(agent(A), theme(shear1))

4. Other Compounds

In addition to nominalizations, PUNDIT deals with three other types of noun-noun compounds. One is the category of nouns with arguments. These include pressure and temperature, for example. They are decomposed and have semantic roles like nominalizations; however, their treatment is different from that of nominalizations in that they do not undergo time analysis, since they do not describe temporal situations. As an example, the definition of pressure, pressureP(theme(T), location(L)), specifies theme and location as roles. The analysis of a noun phrase like sac oil pressure would fill in the location with the sac and the theme with the oil, resulting in the final representation pressureP(theme(oil1), location(sac1)). The syntactic mapping rules for the roles permit the theme to be filled in by either a noun modifier, such as oil in this case, or the object of an of prepositional phrase, as in pressure of oil, as the sketch below illustrates.
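This is a hypothetical Prolog sketch of such mapping rules, with invented noun-phrase structures: the theme of pressure is accepted either from a noun modifier or from an of-pp, so both realizations yield the same filler.

    % Mapping rules for the theme role of "pressure".
    theme_filler(np(Modifiers, pressure, _PPs), F) :-
        member(F, Modifiers).            % noun modifier: "oil pressure"
    theme_filler(np(_Modifiers, pressure, PPs), F) :-
        member(pp(of, F), PPs).          % of-pp: "pressure of oil"

    analyze(NP, pressureP(theme(F), location(_L))) :-
        theme_filler(NP, F).

    % ?- analyze(np([oil1], pressure, []), Rep).
    % Rep = pressureP(theme(oil1), location(_)).
    % ?- analyze(np([], pressure, [pp(of, oil1)]), Rep).
    % Rep = pressureP(theme(oil1), location(_)).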
Similarly, the mapping rules for the location allow it to be filled in by either a noun modifier or by the object of an in prepositional phrase. Because of this flexibility, the noun phrases sac oil pressure, oil pressure in sac, and pressure of oil in sac all receive the same analysis.

The second class of compounds is that of nouns which do not have semantic roles. For these, a set of domain-specific semantic relationships between head nouns and noun modifiers has been developed. These include: area-of-object, for example, blade tip; material-form, such as metal particles; and material-object, such as metal cylinder. These relationships are assigned by examining the semantic properties of the nouns. The corresponding prepositional phrases, as in tip of blade, particles of metal, and cylinder of metal, have a similar analysis.

Finally, many noun-noun compounds are handled as idioms, in cases where there is no reason to analyze the semantics of their internal structure. Idioms in the CASREPS domain include ship's force, gear shaft, and connecting pin. Our decision to treat these as idioms does not imply that we consider them unanalyzable, or noncompositional, but rather that, in this domain, there is no need to analyze them any further.

5. Previous Computational Treatments

Previous computational treatments of nominalizations differ in two ways from the current approach. In the first place, nominalizations have often been treated simply as one type of noun-noun compound. This viewpoint is adopted by [Finin1980, Leonard1984, Brachman1978]. Certainly many nominalizations contain nominal premodifiers and hence, syntactically, are noun-noun compounds; however, this approach obscures the generalization that prepositional phrase modifiers in non-compound noun phrases often have the same semantic roles with respect to the head noun as noun modifiers. PUNDIT's analysis is aimed at a uniform treatment of the semantic similarity among expressions like repair of engine, engine repair, and (someone) repaired engine, rather than the syntactic similarity of engine repair, air pressure, and metal particles. Of the analyses mentioned above, Brachman's analysis seems to be most similar to ours in that it provides an explicit link from the nominalization to the related verb to relate the roles of the noun to those of the verb.

The second way in which our approach differs from previous approaches is that PUNDIT's analysis is driven by taking the semantic roles of the predicate and trying to fill them in any way it can. This means that PUNDIT knows when a role is not explicitly present, and consequently can call on the other mechanisms which we have described above to fill it in. Other approaches have tended to start by fitting the explicitly mentioned arguments into the role slots; thus they lack this flexibility.

6. Limitations

The current system has two main limitations. First, there is no attempt to build internal structure within a compound. Each nominal modifier is assumed to modify the head noun unless it is part of an idiom. For this reason, noun phrases like impellor blade tip erosion cannot be handled by our system in its current state, because impellor blade tip forms a semantic unit and should be analyzed as a single argument of erosion. The second problem is related to the first. The system does not now keep track of the relative order of nominal modifiers.
In this domain, this does not present serious problems, since there are no examples where a different order of modifiers would result in a different analysis. Generally, only one order is acceptable, as in sac oil contamination.

7. Conclusions

In this paper we have described a treatment of nominalizations in which the goal is to maximize the similarities between the processing of nominalizations and that of the clauses to which they are related. The semantic similarities between nominalizations and clauses are captured by making the semantic roles, semantic decompositions, and selectional restrictions on the roles the same for nominalizations and their related verbs. As a result, the same semantic representation is constructed for both structures. This similarity in representation in turn allows reference resolution to find referents for nominalizations which refer to events previously described in clauses. In addition, it allows the time component to integrate temporal relationships among events and situations described in clauses with those referred to by nominalizations.

On the other hand, where differences between nominalizations and clauses have a clear linguistic motivation, our treatment provides for differences in processing. PUNDIT recognizes that the semantic roles of nominalized verbs are expressed syntactically as modifiers of nouns rather than arguments of clauses by having a different set of syntactic mapping rules. It is also true in nominalizations that there are no syntactically obligatory arguments, so the analysis of a nominalization does not fail when there is an unfilled obligatory role, as is the case with clauses. Finally, the temporal analysis component is able to take into account the fact that nominalizations are untensed.

While there are many cases not yet covered by our system, in general, we believe this to be an approach to processing nominalizations which is both powerful and extensible, and which will provide a natural basis for further development.

Acknowledgements

We would like to thank Lynette Hirschman and Bonnie Webber for their helpful comments on this paper.

APPENDIX

DECOMPOSE VERB;
FOR EACH SEMANTIC ROLE
  CASE 1: IF THERE ARE SYNTACTIC CONSTITUENTS -
    PROPOSE SYNTACTIC CONSTITUENT FILLER
    CALL REFERENCE RESOLUTION & TEST SELECTIONAL RESTRICTIONS
  CASE 2: IF ROLE IS OBLIGATORY AND SYNTACTICALLY UNFILLED - FAIL
  CASE 3: IF ROLE IS ESSENTIAL AND UNFILLED -
    CALL REFERENCE RESOLUTION TO HYPOTHESIZE A FILLER
    & TEST SELECTIONAL RESTRICTIONS
  CASE 4: IF ROLE IS NON-ESSENTIAL AND UNFILLED - LEAVE UNFILLED
CALL TEMPORAL ANALYSIS ON DECOMPOSITION

Figure 1. Clause Analysis Algorithm

DECOMPOSE NOMINALIZATION
FOR EACH SEMANTIC ROLE:
  IF THERE ARE SYNTACTIC CONSTITUENTS -
    PROPOSE SYNTACTIC CONSTITUENT FILLER
    & CALL REFERENCE RESOLUTION & TEST SELECTIONAL RESTRICTIONS
..................................................................
CALL TEMPORAL ANALYSIS ON DECOMPOSITION
CALL REFERENCE RESOLUTION FOR NOMINALIZATION NOUN PHRASE
FOR EACH SEMANTIC ROLE:
  IF ESSENTIAL ROLE AND UNFILLED -
    CALL REFERENCE RESOLUTION TO HYPOTHESIZE A FILLER
    & TEST SELECTIONAL RESTRICTIONS
  ELSE LEAVE UNFILLED

Figure 2. Nominalization Analysis Algorithm

REFERENCES

[Brachman1978] Ronald J. Brachman, A Structural Paradigm for Representing Knowledge. BBN Report No. 3605, Bolt Beranek & Newman, Cambridge, Massachusetts.

[Dahl1986] Deborah A. Dahl, Focusing and Reference Resolution in PUNDIT, Presented at AAAI, Philadelphia, PA, 1986.
[Dahl1987] Deborah A. Dahl, Determiners, Entities, and Contexts, Presented at TINLAP-3, Las Cruces, New Mexico, January 7-9, 1987.

[Fillmore1968] C. J. Fillmore, The Case for Case. In Universals in Linguistic Theory, E. Bach and R. T. Harms (ed.), Holt, Rinehart, and Winston, New York, 1968.

[Finin1980] Tim Finin, The Semantic Interpretation of Compound Nominals, PhD Thesis, University of Illinois at Urbana-Champaign, 1980.

[Hirschman1985] L. Hirschman and K. Puder, Restriction Grammar: A Prolog Implementation. In Logic Programming and its Applications, D. H. D. Warren and M. VanCaneghem (ed.), 1985.

[Hirschman1986] L. Hirschman, Conjunction in Meta-Restriction Grammar. J. of Logic Programming, 1986.

[Leonard1984] Rosemary Leonard, The Interpretation of English Noun Sequences on the Computer. North Holland, Amsterdam, 1984.

[Palmer1981] Martha S. Palmer, A Case for Rule Driven Semantic Processing. Proc. of the 19th ACL Conference, June, 1981.

[Palmer1985] Martha S. Palmer, Driving Semantics for a Limited Domain, Ph.D. thesis, University of Edinburgh, 1985.

[Palmer1986] Martha S. Palmer, Deborah A. Dahl, Rebecca J. Schiffman (Passonneau), Lynette Hirschman, Marcia Linebarger, and John Dowding, Recovering Implicit Information, Presented at the 24th Annual Meeting of the Association for Computational Linguistics, Columbia University, New York, August 1986.

[Passonneau1986] Rebecca J. Passonneau, A Computational Model of the Semantics of Tense and Aspect, Logic-Based Systems Technical Memo No. 43, Paoli Research Center, System Development Corporation, November, 1986.

[Passonneau1988] Rebecca J. Passonneau, Designing Lexical Entries for a Limited Domain, Logic-Based Systems Technical Memo No. 42, Paoli Research Center, System Development Corporation, April, 1988.
A COMPOSITIONAL SEMANTICS OF TEMPORAL EXPRESSIONS IN ENGLISH

Erhard W. Hinrichs
BBN Laboratories Inc.
10 Moulton St.
Cambridge, MA 02238

Abstract

This paper describes a compositional semantics for temporal expressions as part of the meaning representation language (MRL) of the JANUS system, a natural language understanding and generation system under joint development by BBN Laboratories and the Information Sciences Institute.¹ The analysis is based on a higher order intensional logic described in detail in Hinrichs, Ayuso and Scha (1987). Temporal expressions of English are translated into this language as quantifiers over times which bind temporal indices on predicates. The semantic evaluation of time-dependent predicates is defined relative to a set of discourse contexts, which, following Reichenbach (1947), include the parameters of speech time and reference time. The resulting context-dependent and multi-indexed interpretation of temporal expressions solves a set of well-known problems that arise when traditional systems of tense logic are applied to natural language semantics. Based on the principle of rule-to-rule translation, the compositional nature of the analysis provides a straightforward and well-defined interface between the parsing component and the semantic interpretation component of JANUS.

¹ The work presented here was supported under DARPA contract #N00014-85-C-0016. The views and conclusions contained in this document are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of the Defense Advanced Research Projects Agency or of the United States Government.

1 Introduction

JANUS is a natural language understanding and generation system which allows the user to interface with several knowledge bases maintained by the US Navy. The knowledge bases contain, among other things, information about the deployment schedules, locations and readiness conditions of the ships in the Pacific Fleet.

(1) a. Did the admiral deploy the ship?
    b. Which C3 ships are now C4?
    c. When will Vincent arrive in Hawaii?
    d. Who was Frederick's previous commander?

As the sample queries in (1) demonstrate, much of this information is highly time-dependent: ships change locations in accordance with their deployment schedules, incur equipment failures or undergo personnel changes which can lead to changes in the ship's readiness rating. It is, therefore, imperative that at the level of semantic representation of the natural language input an adequate analysis can be provided for those linguistic expressions that carry time information, for example, tenses, temporal adverbials and temporal adjectives.

2 Applying Classical Tense Logic To Natural Language Semantics

My own treatment of temporal expressions is very much a response to the kinds of analyses that have been provided in classical tense logic. When I refer to classical tense logic I mean the kinds of logics that originate in the work of the logician Arthur Prior (Prior 1967) and that have been applied by Montague (Montague 1973) and others to natural language semantics.

In classical tense logic time-dependency of information enters into the definition of the notion of a proposition. Propositions are defined as functions from a set of times T to the set of truth values true and false. Declarative sentences of natural language are taken to express propositions. The sentence It is raining can be taken to be that proposition which yields the value true for those times at which it is raining and false for those at which it is not. Tense operators can be defined in such a logic as in (2) and (3). (2) defines a past operator capital P which, applied to a proposition p, yields the value true for some time t if the proposition p is true at some time t' prior to t. Likewise, (3) defines a Y operator, where Y is mnemonic for yesterday, with the expected truth conditions: Yp is true at t if p is true at some time t' that falls within the day prior to the day in which t falls.

(2) [P p]t = T iff [p]t' = T for some time t' < t.

(3) [Y p]t = T iff [p]t' = T for some time t' ∈ [DAY(t) − 1].
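As a concrete, purely illustrative rendering of (2) and (3), the following Prolog sketch model-checks the two operators over integer times; the ten-tick "day" granularity and all predicate names are invented, not part of the paper's formal system.

    % Base proposition: it is raining at times 3 and 4.
    holds(raining, 3).
    holds(raining, 4).

    day(T, D) :- D is T // 10.          % invented granularity: 10 ticks/day

    holds(p(P), T) :-                   % [P p]_t: p true at some t' < t
        holds(P, T1), T1 < T.
    holds(y(P), T) :-                   % [Y p]_t: p true within the prior day
        day(T, D), D1 is D - 1,
        holds(P, T1), day(T1, D1).

    % ?- holds(p(raining), 7).    % true: raining at 3, and 3 < 7
    % ?- holds(y(raining), 12).   % true: 3 falls in day 0, the day before day 1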
All of this sounds rather plausible. However, it turns out that if one tries to apply tense operators such as P and Y in natural language semantics, a set of well-known problems arises.²

² In fairness to Prior, it has to be pointed out that he designed his temporal modal logics as purely formal systems and did not design them with the idea of applying them to natural language. However, Priorean tense logic has, nonetheless, been applied to natural language semantics. It is those studies that are subject to the criticisms presented in sections 3.1 - 3.4.

3 Inadequacies Of Classical Tense Logic

3.1 Interaction of Tense and Time Adverbials

The first such problem, which I pointed out in Hinrichs (1981) and which has been independently noted by Dowty (1982), concerns the interaction between tense and time adverbials. If for sentence (4) one interprets the past tense by the P operator and the adverbial yesterday by the Y operator, then one of the two operators has to have scope over the other.

(4) Vincent left yesterday.

(5) P[Y[leave'(Vincent')]]

(6) Y[P[leave'(Vincent')]]

However, neither the formula in (5) nor the one in (6) gives adequate truth conditions for (4). In (5) the P operator shifts the temporal evaluation of the proposition Y[leave'(Vincent')] from the speech time to some past time t', and then the Y operator shifts evaluation to some time t'' within the day prior to t', instead of the day prior to the speech time. (6) assigns wrong truth conditions as well. Here the Y operator shifts evaluation to some time within the day prior to the speech time. But then the P operator in turn shifts evaluation to some time prior to that, but necessarily within the same day.

3.2 Interaction of Tense and Negation

Similar problems arise when one uses standard tense logic for sentences in which tense interacts with sentence negation as in (7). As was first pointed out by Partee (1973), one can assign the past tense operator P either narrow scope with respect to negation as in (8) or wide scope as in (9).

(7) Vincent did not leave.

(8) ¬[P[leave'(Vincent')]]

(9) P[¬[leave'(Vincent')]]

However, neither the formula in (8) nor the one in (9) assigns adequate truth conditions to (7). Formula (8) says that there exists no time in the past at which the proposition is true, clearly not capturing the meaning of (7). (9) makes (7) true if at any time in the past
Enc points out that Priorean tense operators fail to capture certain readings of sentences such as (10). (10) Every admiral was (once) a cadet. (1 1) V x [ admiral'(x) --, P [ cadet'(x) ]] (12) P [ ~" x [ admiral'(x) --~ cadet'(x) ]] Since the past tense operator P is a propositional operator, it can take scope over the consequent of the material implication in (11). (11) represents the read- ing that everyone who is an admiral now was a cadet at some time in the past. The second reading in (12), where P has scope over the entire formula assigns the somewhat absurd truth conditions that at some time in the past every admiral as simultaneously a cadet. However, as Enc observes correctly, with propositional tense operators one cannot obtain the perfectly natural reading that everyone who is an ad- miral now or who was an admiral at some time in the past was a cadet at some time prior to being an ad- miral. 3.4 Temporal Anaphora There is fourth problem that arises when one uses tense operators of standard tense logic for the seman- tic interpretation of single sentences or pieces of dis- course that describe multiple events. (13) Vincent was I~it by a harpoon, was aban- doned by its crew, and sank. The most natural interpretation of (13) is one in which the events are understood to have happened in the same temporal order as they are sequenced in the sentence. However, if one uses a Priorean P operator to interpret each occurrence of the past tense in (13), one arrives at an interpretation, which incorrectly allows for any temporal ordering. 4 A Tense Logic with Multiple Indices It turns out that most of the problems that I have just discussed can be solved if one recognizes more than one parameter of temporal evaluation. In the models given to tense logics such as the ones first 9 developed by Prior, one standardly evaluates proposi- tions with respect to a single time which one may call the event time, the time at which an event happens or at which a state of affairs obtains. The point of speech is taken to be a special case of this parameter. An alternative to models with only one temporal parameter has been given by Reichenbach (1947). Reichenbach argues for distinguishing between three parameters which he calls speech time, event time and reference time. The meaning of the first two parameters should be self-explanatory. It is the third parameter, reference time, that requires explanation. Reichenbach conceives of reference time as the tem- poral perspective from which an event is viewed, as opposed to event time as the time at which the event occurs. Reference time can be either implicit in the discourse context or explicitly specified by temporal adverbials such as yesterday. For each individual tense reference time is temporally ordered with respect to the other two parameters. Reference time plays a crucial role in Reichenbach's account of the distinction betwen the simple past and the present perfect in English. In both cases event time preceeds speech time. But while for the simple past, the event time is viewed from a perspective in the past, the event is viewed from the perspective of the present in the case of the present perfect. Given the distinction between reference time and event time, one can then formalize Reichenbach's analysis of the past tense as in (14). The operator P shifts evaluation of the event time t to some time t' in the past such that t' falls within some reference time r. (14) [P P]r,t = Tiff [P]r,r for some time t' such that t' < t and t' ~; r. 
The Y operator on the other hand, does not shift the event time t, rather it operates on the reference time r in the obvious way. 3 (15) ~/P]r,t == Tiff [P][DAY(t=)-I],t = T. With the redefined operators P and Y, one can now give adequate truth conditions for sentences involving tense and time adverbials. In the formula in (16) Y specifies the reference time r to be the day prior to the speech time, and then the P operator locates the event time as being within that reference time. (16) [Y [ P [ leave' (Vincent') ] ]r,t = T iff [ P [leave' (Vincent') ]][DAY(t=)-I].t == T iff [ leave' (Vincent') ]][OAY(t ).l],t' == T for some t'< t and t'~; [DAY(ts)-I ]. Likewise for tense and negation, the past operator locates the event time t prior to speech time and within some reference time r which in the case of (17) has to be taken to be contextually specified. "=Operators similar to the redefined P and Y operators have first been suggested in the literature by Acquist (1976). (17) Vincent did not leave. (18) [7 [P [leave'(Vincent')]]]r,t = T iff [ P[leave'(Vincent')]]r, t =, F iff [leave'(Vincent') ]r,r = F for all times t' such that t' < t and t' <;; r. (17) is true according to (18) if there is no time within the reference time r at which the untensed proposition /eave'(Vincent') is true. It turns out that a multi-indexed tense logic also gives an adequate account of tense in discourse. A detailed account of this can be found in Hinrichs (1981, 1986); here I will only sketch the basic idea: By ordering event times with respect to reference times, as sketched in (20), and by updating such ref- erence times after each event description, one can order multiple events as described in (19) in the ap- propriate way. The relations < and ~; in (20) are meant to stand for temporal precedence and temporal inclusion, respectively. (19) Vincent [was hit by a harpoon]%, [was aban- doned by its crew]e =, and [sank]%. (20) r 1 < r 2 < r 3 ul Ul Ul • I • 2 • 3 Let us consider next two alternative logical representations for sentence (21) in such a multi= indexed logic. (21) Vincent left yesterday. (22) [Y [ P [leave' (Vincent') ] ] ]r,t (23) 3 t' [t' < t s & t r - [DAY(ts) - 1] & t' ¢ t r & leave'(Vincent')(t') ] The one in (22) I have already discussed. In (22) past tense is translated into a propositional operator whose semantics is implicit in the truth conditions imposed with respect to the model-theory. In the formula in (23) the past tense leads to existerltial quantification over times. The existential quantifier binds variables which appear as extra argument positions on predi- cates. So, ship" which is ordinarily taken to be a one-place predicates turns into a two-place predicate that takes individuals and times as its arguments. The variable t r occurs as a free variable in (23) and stands for the Reichenbachean reference time. Although the two formulas in (22) and (23) are logically equivalent in the sense that both are true under the same set of models, I will adopt the style of logical representation in (23) for remainder of this paper This is because in the context of the JANUS system, it is important to explicitly quantify over times since in the database times are explicitly entered as dates, time stamps, etc. In order to be able to access them, it is important to incorporate time information explicitly at the level of logical form. A second reason for preferring the style of 10 representation in (23) over the one in (22) concerns the interaction between tenses and quantified NP's. 
5 Tense and Quantified Noun Phrases

Using the style of representation exemplified by formula (23), let me then return to the issue of tense and quantification, which is still unresolved. Consider once again the types of examples that, as Enc points out, cannot be handled in standard tense logic.

(24) Every admiral was (once) a cadet.

(25) ∀x [admiral'(x) → P[cadet'(x)]]

(26) P[∀x [admiral'(x) → cadet'(x)]]

If tense operators like P have scope over propositions, P can either scope over an entire formula as in (26) or over the consequent of the material implication as in (25). Now, as we saw earlier, neither formula captures the reading that all present or past admirals were cadets prior to their being admirals.

Enc (1981) provides an interesting solution to the problem posed by examples such as (24). Her solution is based on two assumptions: 1. semantically, tenses should have scope only over verb meanings, but not over any larger elements in a sentence, and 2. verb meanings as well as noun meanings are indexical in the sense that their interpretations depend on the context of the utterance in the same way that demonstrative pronouns such as that and anaphoric pronouns such as she and they do.

As the formula in (27) shows, which represents the translation for (24) in my analysis, I adopt Enc's first assumption and assign tense scope only over the main verb of the sentence.

(27) ∀x [∃t [admiral'(x)(t) & R(x)(t)] → ∃t' [t' < ts & t' ⊆ tr & cadet'(x)(t')]]

The predicate R in (27), whose role I will comment on in more detail shortly, is meant to range over properties which are salient in a given context. The past tense of sentence (24) contributes the existential quantification over times t' that precede the speech point ts and are contained in some contextually specified reference time tr. Following Enc, tense is thus given scope only over the predicate that corresponds to the main verb. However, the formula in (27) also shows that I do not follow Enc in her second assumption, namely her treatment of nouns as indexicals. In contrast to true indexicals, whose denotation depends solely on the context of utterance, I treat the denotation of predicates corresponding to nouns as being time-dependent in an absolute sense, since predicates such as admiral' do carry a time-denoting argument position as part of their function-argument structure. Without such an argument, it seems impossible to give a satisfactory account of temporal adjectives such as former and previous or last, whose function it is to shift the temporal evaluation of the predicate that they combine with. However, I do recognize an element of context dependency inherent in the interpretation of noun phrases such as every admiral, since I interpret such noun phrases with respect to some contextually salient property R. This predicate makes it possible to account for the well-known phenomenon of restricted quantification, namely that in sentences such as (28) the interpretation of everyone does not involve the set of all individuals in the world, but rather the set of all individuals in a given context; for example, everyone at a certain party.⁴

(28) Everyone is having a good time.

⁴ The example is due to Stalnaker (1973).
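The effect of the independent time indices in (27) can be model-checked in the same illustrative Prolog style; the facts about the invented individual smith are hypothetical.

    admiral(smith, T) :- between(10, 20, T).   % smith is an admiral now
    cadet(smith, T)   :- between(1, 4, T).     % ... and was a cadet earlier
    speech_time(15).

    % Narrow scope, as in (27): the cadet-time is quantified independently.
    narrow :-
        forall( admiral(X, _),
                ( speech_time(S), cadet(X, T2), T2 < S ) ).

    % Wide scope, as in the Priorean (26): one time index for both predicates.
    wide :-
        speech_time(S), admiral(X, T), T < S, cadet(X, T).

    % ?- narrow.   % true
    % ?- wide.     % fails: smith is never admiral and cadet simultaneously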
Temporal evaluation of the verbal predicate is, thus, kept separate from the temporal evaluation of predicates corresponding to other constituents in the sentence. As first pointed out by Enc, this strategy makes it possible to account for sentences such as (29) and (30), whose translations require that the predicates secretary and fugitive be evaluated relative to a time which is distinct from the evaluation time of the predicate corresponding to the verb.⁵

(29) Oliver North's secretary testified before the committee.

(30) Every fugitive is now in jail.

⁵ Recall that Fawn Hall, North's secretary, testified before the committee when she was no longer North's secretary. The example is due to an editorial in the Boston Globe.

In contrast to an analysis which interprets the past tense in terms of a Priorean P operator, the narrow scope analysis of tense also avoids the dilemma of inducing a simultaneity reading for sentence (31), as happens if the tense operator P has scope over the entire formula, as in the translation (32) of (31).

(31) Every admiral graduated from West Point.

(32) P[∀x [admiral'(x) → graduate-from'(West-Point')(x)]]

The reading in (32) is factually implausible for two reasons: 1. it imposes simultaneity as part of the truth conditions and requires that all admirals graduated at the same time; 2. since the P operator forces temporal evaluation of all predicates in its scope at the same index, in the case of (31) it requires that every admiral graduated from West Point as an admiral, and not, as is actually the case, subsequent to graduation from the Naval Academy. Notice that the formula in (33), which represents the translation of (31) in my analysis, avoids both problems associated with (32).

(33) ∀x [∃t [admiral'(x)(t) & R(x)(t)] → ∃t' [t' < ts & t' ⊆ tr & graduate-from'(West-Point')(x)(t')]]

Since the temporal evaluations of the predicates admiral' and graduate-from' are kept separate, the first problem does not arise. Since the predicates are existentially quantified over independently, (33), in contrast to (32), also avoids having to assign a simultaneity reading to (31).

A crucial element of my analysis is the inclusion of the predicate R, which is meant to restrict the denotation of quantified NP's such as every ship by properties that are salient in the context of utterance. Apart from keeping the temporal evaluation of verbal predicates and nominal predicates independent of one another, it is this context-dependent feature of my analysis that makes it more flexible than a wide scope analysis of tense. Let me illustrate the context-dependent evaluation of quantified NP's by once again focusing on example (34).

(34) Every admiral graduated from West Point.

Imagine that (34) is uttered in a context in which all current admirals assigned to the Pacific Fleet are under discussion. In that context, R could be instantiated as in (35), i.e., as the intension of the set of individuals y which are assigned to the Pacific Fleet at a time which equals the speech time ts.

(35) λt λy [assigned-to'(Pac-Fleet')(y)(t) & t = ts]

Substituting R by (35) in (36), one then arrives at the formula in (37).
(36) ∀x [∃t [admiral'(x)(t) & R(x)(t)] → ∃t' [t' < ts & t' ⊆ tr & graduate-from'(West-Point')(x)(t')]]

(37) ∀x [∃t [admiral'(x)(t) & assigned-to'(Pac-Fleet')(x)(t) & t = ts] → ∃t' [t' < ts & t' ⊆ tr & graduate-from'(West-Point')(x)(t')]]

In a context in which all present or past admirals in the Pacific Fleet are under discussion, a reading which, as I pointed out in section 3.3, one cannot capture using Priorean tense operators can be captured by instantiating R as in (38), where ≤ stands for the relation of temporally preceding or being equal to.

(38) λt λy [assigned-to'(Pac-Fleet')(y)(t) & t ≤ ts]

The idea behind using the variable R in my analysis is, thus, to have it instantiated appropriately by the discourse context. One of the counterarguments that one may raise against this context-dependent aspect of my analysis of temporal semantics concerns the fact that tracking the salience of objects and their properties in natural language discourse is a notoriously difficult problem. However, I will argue in the next section that whatever mechanisms are needed to track saliency, such mechanisms are motivated independently by semantic and pragmatic phenomena that go beyond the phenomenon of temporal interpretation.

6 Evaluating Time-dependent Predicates in Context

Objects and certain of their properties can receive or maintain salience in a discourse in any number of ways. The notions of focus (Sidner 1983), of common ground (Stalnaker 1978) and of mutual knowledge (Clark and Marshall 1981) are certainly cases in point. In this section I will concentrate on one such mechanism which plays a role in the context-dependent interpretation of time-dependent predicates. I will argue that the mechanism is needed for purposes other than temporal interpretation and, therefore, does not add complexity to my analysis of temporal semantics.

Consider a typical sequence of queries that a user may present to JANUS.

(39) a. Did every admiral deploy a ship yesterday?
     b. Which ships will arrive in Hawaii?

The person asking (39b) is not interested in being informed about all ships that at some time in the future will go to Hawaii. Instead, the user is interested in a much more restricted set of ships that will go there, namely the ones that were deployed by some admiral the day before. In order to arrive at such an interpretation, the free variable R in the translation formula in (40) has to be bound appropriately by the context.

(40) QUERY[λz [z ∈ POW[λy ∃t' [ship'(y)(t') & R(y)(t')]] & ∃t [t > ts & t ⊆ tr & go-to'(Hawaii')(z)(t)]]]

QUERY is a speech act operator which takes the propositional content of the question as an argument and causes it to be evaluated at some temporal index, in this case the point of speech ts. In (40) QUERY applies to a lambda-abstract over those sets z whose members y at some time t' have the property of being a ship and are in addition distinguished by some contextually salient property R, and which at some future time t go to Hawaii. POW stands for the power set operation, which I use for the interpretation of plural nouns. Now if the reader prefers some other approach to the semantics of plurals, say the lattice-theoretic approach of Link (1983), over the approach based on power sets, I am not going to argue with them. The point that I want to concentrate on with respect to the formula in (40) concerns the instantiation of the context-dependent predicate R.
The predicate ship' has to be interpreted relative to the discourse context, and the temporal evaluation of the predicate is determined with respect to that context, rather than by the tense of the sentence, in this case the future.

It turns out that a detailed proposal for how to track objects and their properties does, in fact, already exist in the literature. In her work on the interpretation of pronouns in discourse, Webber (1978, 1983) has developed a framework that constructs during the interpretation of a discourse a context which consists of a set of what she calls discourse entities. These discourse entities then become available as objects that pronouns can refer to. One of the examples that Webber discusses is the interpretation of the pronoun they in (42) in the context of sentence (41).

(41) Every admiral deployed a ship yesterday.

(42) They arrived.

Clearly they refers to the set of ships deployed by some admiral. What is interesting, of course, about the example is that syntactically there is no plural noun phrase in the preceding discourse that could serve as the referent for the plural pronoun they. In order to derive the appropriate discourse entity for the interpretation of they, Webber suggests the rule schema in (43). (43) says that for any formula that meets the structural description (SD), a discourse entity identified by the formula (ID) is to be constructed.

(43) SD: ∀y1...yk ∃x [P → Q]
     ID: λx ∃y1...yk [P & Q]

Instantiated for sentence (41) and its translation (44), the rule produces the expression in (45).

(44) ∀x ∃y,t,t',t'' [admiral'(x)(t) & R1(x)(t) → ship'(y)(t') & R2(y)(t') & tr = [DAY(ts) − 1] & t'' ⊆ tr & deploy'(y)(x)(t'')]

(45) λy ∃x,t,t',t'' [ship'(y)(t) & R2(y)(t) & admiral'(x)(t') & R1(x)(t') & tr = [DAY(ts) − 1] & t'' ⊆ tr & deploy'(y)(x)(t'')]

(45) denotes the set of ships that have been deployed by some admiral. This discourse entity with that description then becomes available for the interpretation of the pronoun they.

It turns out that the method of constructing discourse entities is not only relevant for the interpretation of pronouns, but also for the contextual interpretation of the nouns and noun phrases that I am concerned with here. The discourse entity with the description in (45) can serve not only for interpreting pronouns, but also for instantiating the contextually specified variable R for the interpretation of the noun ship in (46b) in the context of (46a).

(46) a. Did every admiral deploy a ship yesterday?
     b. Which ships will arrive in Hawaii?

(47) QUERY[λz [z ∈ POW[λy ∃t' [ship'(y)(t') & R(y)(t')]] & ∃t [t > ts & t ⊆ t'r & go-to'(Hawaii')(z)(t)]]]

Since the discourse entity in (45), which ranges over a set of ships, is described in terms of the property of having been deployed by some admiral the day prior to the day of the speech point, that property can be taken to be salient in the discourse context. If one substitutes the context variable R in the translation (47) of (46b) by this contextually salient property, the temporal evaluation of the predicate ship' in the resulting formula (48) is no longer governed by the existential quantifier over t for the future tense, but rather by the quantifiers introduced by the contextually salient property. As a consequence of this instantiation of the context variable R, the set of ships under consideration is restricted in the appropriate way.

(48) QUERY[λz [z ∈ POW[λy ∃t' [ship'(y)(t') & ∃x,t'',t''' [admiral'(x)(t'') & R1(x)(t'') & tr = [DAY(ts) − 1] & t''' ⊆ tr & deploy'(y)(x)(t''')]]] & ∃t [t > ts & t ⊆ t'r & go-to'(Hawaii')(z)(t)]]]

Notice that (48) contains two reference time parameters, tr and t'r, which are associated with quantifiers ranging over past and future times, respectively, and which are assumed to be bound by the discourse context. I am assuming here that each tense has an associated reference time which is updated during discourse processing.⁶

⁶ See Hinrichs (1981) for more details on this point.
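The whole pipeline of this section — rule (43) constructing the entity (45), which then instantiates R in (48) — can be sketched in the same illustrative Prolog style; the facts and names are invented.

    admiral(admiral1).  admiral(admiral2).
    ship(ship1).  ship(ship2).  ship(ship3).
    deployed(admiral1, ship1, 4).
    deployed(admiral2, ship2, 5).
    go_to(hawaii, ship1, 30).
    go_to(hawaii, ship3, 31).
    speech_time(20).

    % R(y): y was deployed by some admiral (cf. the entity in (45)).
    salient(Y) :- admiral(X), deployed(X, Y, _).

    % (48), schematically: ships satisfying R that will go to Hawaii.
    answer(Ships) :-
        speech_time(S),
        findall(Y, ( ship(Y), salient(Y),
                     go_to(hawaii, Y, T), T > S ), Ships).

    % ?- answer(S).
    % S = [ship1].   % ship3 also arrives, but was not deployed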
The mechanism for deriving contextually salient properties which are introduced through the previous linguistic discourse may strike the reader as rather complicated in detail. However, as I have argued in this section, tracking such properties is important not only for temporal evaluation; it is independently motivated by other discourse phenomena such as anaphoric reference, as Webber (1978, 1983) has convincingly shown.

7 A Compositional Syntax and Semantics of Tense

In the previous sections I have focused on the semantic and pragmatic aspects of my analysis of temporal expressions, in particular the feature of narrow scope assignment of tense and the feature of context-dependent interpretation of quantified NP's. In this section I will concentrate on matters of syntax and will demonstrate how the narrow scope analysis of tense makes it possible to construct a straightforward compositional syntax and semantics of temporal expressions.

Syntactically, tenses in English appear as inflectional morphemes on verbs. In the notation of categorial grammar, I assign a syntactic tree as in (50) to sentence (49). The untensed form of the verb arrive of category IV is combined with the past tense morpheme -ed to form a tensed intransitive verb IV*. Morpho-syntactically, tenses are therefore items that apply to individual words.

(49) Every ship arrived.

(50) Every ship arrived, S
       Every ship, S/IV*
         Every, S/IV*/CN
         ship, CN
       arrived, IV*
         arrive, IV

Since I assign tense narrow scope in the semantics and let temporal quantifiers bind only the temporal index associated with the main verb, I arrive at an analysis of tense where its syntactic domain coincides with its semantic domain. Compared to analyses in which tense is assigned wide scope over formulas which correspond to entire sentences (Montague 1973) or over entire verb phrases (Bach 1980), the narrow scope analysis which I have developed in this paper has the advantage of leading to a straightforward compositional syntax and semantics of tense. In the syntax the tense morpheme turns an untensed verb into its tensed counterpart, while in the corresponding translation rule tense has the effect of existentially quantifying over the time index of the predicate which translates the untensed verb.

(51) S17. If α ∈ P(IV/nNP), then F11(α) ∈ P(IV*/nNP), with F11(α) = α + -ed.

(52) T17. If α ∈ P(IV/nNP) and α translates into α', then F11(α) translates into λS1...λSn λx [∃t' [t' < ts & t' ⊆ tr & α'(S1)...(Sn)(x)(t')]].

S17 is a rule schema which ranges over untensed intransitive verbs (IV), transitive verbs (IV/NP), ditransitive verbs (IV/NP/NP), etc. The notation IV/nNP thus stands for an IV followed by n slashed NP's. The corresponding translation schema T17 denotes a function from the type of meanings associated with object NP's, if any, to functions from individuals to truth values.
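A minimal, invented sketch of what T17 contributes follows, evaluated directly in a toy model rather than built up as lambda terms: tensing the IV existentially quantifies its time index within the reference time, and the determiner quantifies over individuals, mirroring the worked example in (54) below. The concrete times and the trivial salience predicate are assumptions of the sketch.

    ship(vincent, T) :- between(0, 30, T).   % vincent is a ship throughout
    salient(_, _).                           % R: trivially salient here
    arrive(vincent, 13).
    speech_time(20).
    ref_time(0, 19).                         % contextually given interval

    % T17: tensing an IV existentially quantifies its time index.
    past_iv(Verb, X) :-
        speech_time(S), ref_time(A, B),
        call(Verb, X, T), T < S, A =< T, T =< B.

    % "Every ship arrived": for all salient ships x, past_iv(arrive, x).
    every(CN, VP) :-
        forall(( call(CN, X, T0), salient(X, T0) ), call(VP, X)).

    % ?- every(ship, past_iv(arrive)).
    % true.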
The translation of the entire sentence can be built up in a compositional fashion as in (54), which mirrors the syntactic composition of (50).

(54) arrived translates as:
     λx [ ∃t′ [ t′ < ts & t′ ⊆ tr & arrive′(x)(t′) ]]

     every translates as:
     λP λQ ∀x [ ∃t [ P(x)(t) & R(x)(t) ] → Q(x) ]

     every ship translates as:
     λQ ∀x [ ∃t [ ship′(x)(t) & R(x)(t) ] → Q(x) ]

     Every ship arrived translates as:
     1. λQ ∀x [ ∃t [ ship′(x)(t) & R(x)(t) ] → Q(x) ] (λy [ ∃t′ [ t′ < ts & t′ ⊆ tr & arrive′(y)(t′) ]])
     2. ∀x [ ∃t [ ship′(x)(t) & R(x)(t) ] → λy [ ∃t′ [ t′ < ts & t′ ⊆ tr & arrive′(y)(t′) ]](x) ]
     3. ∀x [ ∃t [ ship′(x)(t) & R(x)(t) ] → ∃t′ [ t′ < ts & t′ ⊆ tr & arrive′(x)(t′) ]]

The phrase every ship is formed by supplying the predicate ship′ as an argument to the translation of every. Notice that the context variable R is introduced by the translation of the quantifier every. The translation of the entire sentence is formed by supplying the translation of the tensed verb arrived, which is produced by the translation rule T17, to the translation of the subject NP. The reduced translation results from two steps of lambda-reduction.
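The two reduction steps can be mimicked directly in a programming language, since function application plays the role of lambda-reduction. The following Python sketch is illustrative only: A and E stand for the universal and existential quantifiers, strings stand for logical formulas, and the function names allude to rules (51)-(52) without implementing them in full generality.

    # Past tense (T17) for an intransitive verb: the verb's time index is
    # existentially quantified, with narrow scope.
    def t17(verb):
        return lambda x: "Et'[t' < ts & t' in tr & %s'(%s)(t')]" % (verb, x)

    # Translation of 'every'; note that the context variable R is
    # introduced here, just as in (54).
    def every(cn):
        return lambda q: "Ax[Et[%s'(x)(t) & R(x)(t)] -> %s]" % (cn, q('x'))

    arrived = t17('arrive')
    print(every('ship')(arrived))
    # Ax[Et[ship'(x)(t) & R(x)(t)] -> Et'[t' < ts & t' in tr & arrive'(x)(t')]]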
8 Conclusion

In this paper I have argued that a logical semantics for temporal expressions can provide adequate representations for natural language input to an interface such as JANUS. The temporal logic is based on Reichenbach's models for the semantics of English tense and uses multiple indices for semantic interpretation. This multi-indexed logic overcomes the kinds of problems that arise when systems of tense logics are used that rely on just one index of evaluation. I have demonstrated how giving narrow scope to tense quantifiers enables us to provide adequate scope relations with respect to NP quantifiers and to interpret such NP's relative to a given discourse context. I have argued that the context-dependent feature of the analysis does not add extra complexity to my treatment of time-dependent expressions, but is needed for purposes of discourse understanding in general. Finally, I have demonstrated how the narrow scope of tense results in a fully compositional syntax and semantics of tensed sentences in English.

9 Acknowledgements

I am grateful to Remko Scha and Barry Schein for comments on earlier drafts of this paper. My indebtedness to the work of Hans Reichenbach and Murvet Enc on matters of temporal semantics will be evident throughout the paper.

10 REFERENCES

Aqvist, Lennart (1976). "Formal Semantics for Verb Tenses as Analyzed by Reichenbach". In: van Dijk, Teun ed. Pragmatics of Language and Literature. Amsterdam: North Holland, pp. 229-236.

Bach, Emmon (1980). "Tenses and Aspect as Functions of Verb Phrases". In: Ch. Rohrer ed. Times, Tenses, and Quantifiers. Niemeyer: Tuebingen, W. Germany.

Clark, H. H. and Marshall, C. R. (1981). "Definite Reference and Mutual Knowledge". In: A. Joshi, B. Webber and I. Sag eds. Elements of Discourse Understanding. Cambridge University Press: Cambridge, pp. 10-63.

Dowty, David R. (1982). "Tenses, Time Adverbs, and Compositional Semantic Theory". Linguistics and Philosophy, Vol. 5, pp. 23-55.

Enc, Murvet (1981). Tense without Scope: An Analysis of Nouns as Indexicals. University of Wisconsin, Madison dissertation. Distributed by IULC.

Enc, Murvet (1986). "Towards a Referential Analysis of Temporal Expressions". Linguistics and Philosophy, Vol. 9.4, pp. 405-426.

Hinrichs, Erhard (1981). Temporale Anaphora im Englischen. Unpublished Staatsexamen thesis: University of Tuebingen.

Hinrichs, Erhard (1986). "Temporal Anaphora in Discourses of English". Linguistics and Philosophy, Vol. 9.1, pp. 63-82.

Hinrichs, Erhard, Damaris Ayuso and Remko Scha (1987). "The Syntax and Semantics of a Meaning Representation Language for JANUS". In: Research and Development in Natural Language Understanding as Part of the Strategic Computing Program, Annual Technical Report December 1985 - December 1986, BBN Technical Report 6522.

Link, Godehard (1983). "The Logical Analysis of Plurals and Mass Terms". In: Baeuerle, Schwarze and von Stechow eds. Meaning, Use and Interpretation of Language. Berlin: De Gruyter, pp. 250-269.

Montague, Richard (1973). Formal Philosophy. Ed. by Richmond Thomason. Yale University Press: New Haven.

Partee, Barbara H. (1973). "Some Structural Analogies between Tenses and Pronouns". The Journal of Philosophy 70:18, pp. 601-609.

Prior, Arthur (1967). Past, Present and Future. Oxford: Oxford University Press.

Reichenbach, Hans (1947). Elements of Symbolic Logic. Berkeley: University of California Press.

Scha, Remko (1983). Logical Foundations for Question Answering. Philips Research Laboratories M.S. 12.331. Eindhoven, The Netherlands.

Sidner, Candace (1983). "Focusing in the Comprehension of Definite Anaphora". In: Brady, Michael and Robert Berwick eds. Computational Models of Discourse. Boston: MIT Press, pp. 267-330.

Stalnaker, Robert (1973). "Pragmatics". In: D. Davidson and G. Harman eds. Semantics of Natural Language. Reidel Publishing: Dordrecht, pp. 380-397.

Stalnaker, Robert (1978). "Assertion". In: P. Cole ed. Syntax and Semantics Vol. 9. New York: Academic Press, pp. 315-332.

Webber, Bonnie (1978). A Formal Approach to Discourse Anaphora. BBN Technical Report No. 3761. Bolt Beranek and Newman, Inc.: Cambridge, MA.

Webber, Bonnie (1983). "So what can we talk about now?". In: Brady, Michael and Robert Berwick eds. Computational Models of Discourse. Boston: MIT Press, pp. 331-371.
TOWARD TREATING ENGLISH NOMINALS CORRECTLY

Richard W. Sproat, Mark Y. Liberman
Linguistics Department
AT&T Bell Laboratories
600 Mountain Avenue
Murray Hill, NJ 07974

Abstract

We describe a program for assigning correct stress contours to nominals in English. It makes use of idiosyncratic knowledge about the stress behavior of various nominal types and general knowledge about English stress rules. We have also investigated the related issue of parsing complex nominals in English. The importance of this work and related research to the problem of text-to-speech is discussed.

I. Introduction

We will discuss the analysis of English expressions consisting of a head noun preceded by one or more open-class specifiers: rising prices, horse blanket, mushroom omelet, banana bread, parish priest, gurgle detector, quarterback sneak, blind spot, red herring, bachelor's degree, Planck's constant, Madison Avenue, Wall Street, Washington's birthday sale, error correction code logic, steel industry collective bargaining agreement, expensive toxic waste cleanup, windshield wiper blade replacement, computer communications network performance analysis primer, and so forth. For brevity, we will call such expressions 'nominals.' Our main aim is an algorithm for assigning stress patterns to such nominal expressions; we will also discuss methods for parsing them.

Nominals are hard to parse, since their pre-terminal string is usually consistent with all possible constituent structures, so that we seem to need an analysis of the relative plausibility of the various meanings (Marcus, 1980; Finin, 1980). Even when the constituent structure is known (as trivially in the case of binary nominals), nominal stress patterns are hard to predict, and also seem to depend on meaning (Bolinger, 1972; Fudge, 1984; Selkirk, 1984). This is a serious problem for text-to-speech algorithms, since nominal expressions are common at the ends of phrases, and the location of a phrase's last accent has a large effect on its sound. Complex nominals are common in most kinds of text; for example, in the million words of the Brown Corpus (Francis and Kučera, 1982), there are over 73,000 nominals containing more than two words. However, we have been able to make some progress on the problems of parsing and stress assignment for nominals in unrestricted text. This paper concentrates on the representation and use of knowledge relevant to the problem of assigning stress; this same knowledge turns out to be useful in parsing.

For the purposes of this paper, we will be dealing with nominals in contexts where the default stress pattern is not shifted by phenomena such as intonational focus or contrastive stress, exemplified below:

(1) a. We're only interested in solvable problems. (words like only depend on stress to set their scope -- otherwise, this nominal's main stress would be on its final word.)
    b. He's a lion-tamer, not a lion-hunter. (in a non-contrastive context, these nominals' main stresses would be on their penultimate words.)

These interesting phenomena rarely 1 shift main phrase stress in expository text, and are best seen as a modulation of the null-hypothesis stress patterns.

1. In our samples, only a fraction of a percent of complex nominals in phrase-final position have their main stress shifted by focus or contrast.
We have argued elsewhere (Liberman and Sproat, 1987) for the following positions: (i) the syntax of modification is quite free -- various modifiers of nominal heads (including adjectives, nouns, and possessives) may occur as sisters of any X-bar projection of the nominal head; (ii) modification at different X-bar levels expresses different types of meaning relations (see also Jackendoff, 1973); (iii) the English nominal system includes many special constructions that do not conform to the usual specifier-head patterns, such as complex names, time and date expressions, and so forth; (iv) the default stress pattern depends on the syntactic structure.

Points (ii) and (iv) are common opinions in the linguistic literature. In particular, we support generative phonology's traditional view of phrasal stress rules, which is that structures of category N0 have the pattern assigned by the compound stress rule, which makes left-hand subconstituents stress-dominant unless their right-hand sisters are lexically complex. 2 In simple binary cases, this amounts to left-hand stress. All other structures are (recursively) right stressed, according to what is called the nuclear stress rule. 3

Points (i) and (iii) are less commonplace. They make it impossible to predict stress from the preterminal string of a binary nominal, since the left-hand element may be attached at any bar level, or may be involved in some special construction. We do not have space to argue here for this point of view, but some illustrative examples may help make our position clearer.

2. Various authors (e.g. Liberman & Prince 1977, Hayes 1980) have suggested that the behavior of the compound stress rule, which in fact applies to compound nouns but not to compound adjectives or verbs, is related to the tendency of non-compound English nouns to have their main stress one syllable farther back than equivalent verbs or adjectives. This generalization strengthens the argument that [N N] constituents with left-hand stress are of parent category N0.
3. See Chomsky and Halle (1968), Liberman and Prince (1977), Hayes (1980) for various versions of these rules.

Examples of adjectives and possessives within N0 include sticky bun, black belt, safe house, straight edge, sick room, medical supplies, cashier's check, user's manual, chef's knife, Melzer's solution, etc. We can see that this is not simply a matter of non-compositional semantics by contrasting the stress pattern of red herring, blue moon, Irish stew, hard liquor, musical chairs, dealer's choice, Avogadro's number, cat's pajamas. The N0 status of e.g. user's manual can be seen by its stress pattern as well as its willingness to occur inside quantifiers and adjectives: three new user's manuals, but *three new John's books. In addition, there are several classes of possessive phrases that take right-hand stress but pattern distributionally like adjectives, i.e. occur at N1 level, as in three Kirtland's Warblers.

Examples of nouns at N1 level include the common 'material-made-of' modifiers (such as steel bar, rubber boots, paper plate, beef burrito), as well as most time and place modifiers (garage door, attic roof, village street, summer palace, spring cleaning, holiday cheer, weekend news), some types of modification by proper names (India ink, Tiffany lamp, Miami vice, Ming vase), and so on. Thus a stress-assignment algorithm must depend on meaning relationships between members of the nominal, as well as the collocational propensities of the words involved.
We have written a program that performs fairly well at the task of assigning stress to nominals in unrestricted text. The input is a constituent structure for the nominal, and the output is a representation of its stress contour. Some examples of nominals to which the program assigns stress correctly are given in (2), where primary stress is marked by boldface and secondary stress by italics:

(2) [[Boston University] [Psychology Department]]
    [[[Tom Paine] Avenue] Blues]
    [corn flakes]  [rice pudding]  [apricot jam]
    [wood floor]  [cotton shirt]  [kitchen towel]
    [Philadelphia lawyer]  [city employee]  [valley floor]
    [afternoon sun]  [evening primrose]  [Easter bunny]
    [morning sickness]  [[Staten Island] Ferry]  [South Street]
    [baggage claim]  [Mississippi Valley]  [Buckingham Palace]
    [Surprise Lake]  [Murray Hill]

There are two main components to the program, the first of which deals almost exclusively with binary nominals and the second of which takes n-ary nominals and figures out their stress pattern. We deal with each in turn.

2. Binary Nominals

Much of the work in assigning stress to nominals in English involves figuring out what to do in the binary cases, and this section will discuss how various classes of binary (and some n-ary nominals, n>2) are handled. For example, to stress [[Boston University] [Psychology Department]] correctly it is necessary to know that Psychology Department is stressed on the left-hand member. Once that is known, the stress contour of the whole four-member nominal follows from general principles, which will be outlined in the subsequent section of this paper. To determine the stress pattern of a binary nominal, the following procedure is followed:

1. First of all, check to see if the nominal is listed as being one of those which is exceptionally stressed. For instance, our list of some 7000 left-stressed nominals includes [morning sickness], which will thus get left stress despite the general preference for right stress in nominals where the left-hand member is construed as describing a location or time for the right-hand member. [Morning prayers], which follows the regular pattern, is stressed correctly by the program. Similarly, [Easter Bunny] is listed as taking left stress whereas [Easter feast] is correctly stressed on the right. There is a common misconception to the effect that all and only the lexicalized (i.e. listed) nominal expressions are left-stressed. This is false: lexicalization is neither a necessary nor a sufficient condition for left stress. Dog annihilator is left-stressed although not a member of the phrasal lexicon, and red herring is right-stressed although it must be lexically listed. Such examples abound (see, also, section 1).

2. If the nominal is not listed, check through all of the heuristic patterns that might fit it. A few examples of these patterns are given below -- some of them are semantic or pragmatic in character, others are syntactic, and others are simply lexical. Note that there is not an easy boundary (for such an algorithm) between a pattern based on meaning and one based on word identity, since semantic classes correspond roughly to lists of words.

MEASURE-PHRASE: the left-hand member describes a unit of measure in terms of which the right-hand member is valued. Examples: dollar bill, pint jug, 5 gallon tank... These normally take right stress.

LOCATION-TIME-OR-SUBSTANCE: the left-hand member describes the location or time of the right-hand member, or else a substance out of which the right-hand member is made.
Location examples: kitchen towel, downstairs bedroom, city hall... Time examples: Monday morning, Christmas Day, summer vacation... Substance examples: wood floor, china doll, iron maiden. These normally take right stress.

ING-NOMINAL, AGENT-NOMINAL, DERIVED-NOMINAL: All of these are cases where the right-hand member is a noun derived from a verb, either by the affix -ing (sewing), -er (catcher) or some other affix (destruction). Nominals with these typically have left-hand stress if the left-hand member can be construed as a grammatical object of the verb contained in the right-hand member: dog catcher, baby sitting, automobile demolition. On the other hand, if the left-hand member is a subject of the verb in the right-hand member, then stress is usually right-hand: woman swimmer, child dancing, student demonstration.

NOUN-NOUN: If both elements are nouns, and no other considerations intervene, left-hand stress occurs a majority of the time. Therefore a sort of default rule votes for left-hand stress when this pattern is matched. Examples of correct application include: dog house, opera buff, memory cache. Not much weight is given to this possibility, since something which is simply possibly a left-stressed noun-noun compound may be many other things as well. Complex typologies of the meaning relations in noun-noun compounds can be found in Lees (1960), Quirk et al. (1972), Levi (1978). These typologies cross-cut the stress regularities in odd ways, and are semantically rather inhomogeneous as well, so their usefulness is questionable.

SELF: The left-hand member is the word self (e.g., self promotion, self analysis...). Right-hand stress is invariably assigned, since self is anaphoric, hence destressed following the normal pattern for anaphors.

PLACE-NAME: The right-hand member is a word like pond, mountain, avenue etc., and the left-hand member is plausibly a name. These cases get right-hand stress. Obviously, names ending in the word Street are an exception ([Madison Avenue] vs. [Wall Street]).

All of the applicable patterns for a given nominal are collected. Each pattern has a weight. For instance, as noted above, little weight is given to the observation that a particular nominal may be a noun-noun compound, since the preterminal string [N N] often belongs to categories that yield right-hand stress. On the other hand, if the analysis and its stress pattern are almost certain, as it is for sequences of the form [self N], then much weight is given to this pattern. The weights are tallied up as 'votes' for assigning stress to one member or the other. The pattern with the most votes wins. Currently the weights are assigned in an ad hoc manner by hand; we plan to replace the manual weight assignment with the results of a statistical survey of nominal types in various forms of English.
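A toy rendering of this two-step procedure in Python may make the control flow clearer. The phrasal lexicon, the pattern set, the word lists and the weights below are tiny illustrative stand-ins of my own for the program's actual knowledge base, which is not published in the paper.

    PHRASAL_LEXICON = {('morning', 'sickness'): 'LEFT',
                       ('red', 'herring'): 'RIGHT'}

    def substance_pattern(left, right):
        # LOCATION-TIME-OR-SUBSTANCE (substance case): right stress
        substances = {'wood', 'china', 'iron', 'steel', 'rubber', 'paper'}
        return ('RIGHT', 5) if left in substances else None

    def self_pattern(left, right):
        # SELF: right stress, with a high weight
        return ('RIGHT', 10) if left == 'self' else None

    def noun_noun_pattern(left, right):
        # NOUN-NOUN default: a weak vote for left stress
        nouns = {'dog', 'house', 'opera', 'buff', 'memory', 'cache',
                 'wood', 'floor'}
        return ('LEFT', 1) if left in nouns and right in nouns else None

    PATTERNS = [substance_pattern, self_pattern, noun_noun_pattern]

    def binary_stress(left, right):
        if (left, right) in PHRASAL_LEXICON:       # step 1: listed exceptions
            return PHRASAL_LEXICON[(left, right)]
        votes = {'LEFT': 0, 'RIGHT': 0}            # step 2: weighted votes
        for pattern in PATTERNS:
            hit = pattern(left, right)
            if hit:
                side, weight = hit
                votes[side] += weight
        return 'LEFT' if votes['LEFT'] > votes['RIGHT'] else 'RIGHT'

    print(binary_stress('wood', 'floor'))   # RIGHT: substance outvotes N-N
    print(binary_stress('dog', 'house'))    # LEFT: only the N-N default fires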
3. Assigning Stress to N-Ary Nominals

Given the stress pattern of binary cases, assigning stress to the general n-ary case is straightforward. The algorithm implemented is a version of one developed over the years by various researchers, including Chomsky and Halle (1968), Liberman and Prince (1977), Hayes (1980), Prince (1983) and others. Main stress is assigned to each level of constituent structure recursively, with relative stress values normally preserved as larger pieces of structure are considered. 4 A convenient representation for tallying stress is the so-called 'metrical grid'; each word is associated with a set of marks or ticks on a grid whose higher, sparser levels correspond to metrically more important positions. For example, dog catcher would be represented as (ticks shown as asterisks after each word):

(3) dog **  catcher *

The fact that dog has two ticks as opposed to the one tick assigned to catcher is indicative of the stress prominence of dog. When we combine two constituents together we upgrade the ticks of the highest tick-column of the weakest member to be the same as the highest column of the strongest member. For instance, if we combine dog catcher with training school board meeting we will proceed by the following method:

(4) dog **  catcher *   +   training ***  school *  board **  meeting *
    =>  dog ***  catcher *  training ***  school *  board **  meeting *

As a result, the most stressed element in each subunit starts out at 'tick parity' with the most stressed element in the other subunit. We then increment one of these main stresses to make it the main stress of the entire nominal:

(5) dog ****  catcher *  training ***  school *  board **  meeting *

Finally the program tests for the applicability of the so-called Rhythm Rule. Given the rules so far, for a nominal such as City Hall parking lot we would expect the following stress contour:

(6) City *  Hall **  parking ***  lot *

However, the actual stress contour is:

(7) City **  Hall *  parking ***  lot *

The Rhythm Rule removes clashes between strong stresses by moving the left-hand stress back to the most prominent previous stress within the domain of the left-hand primary stress.

4. As pointed out in Liberman (1975), such bottom-up recursive stress assignment algorithms can simply be thought of as the definition of a relation of relative prominence on all the sets of sister nodes in the tree.
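The grid operations lend themselves to a direct sketch. In the Python fragment below a grid is a list of (word, tick-count) pairs; the tick counts assumed for training school board meeting and the choice of which peak to promote are my own assumptions, reconstructed only to be consistent with the combination rule just described.

    def combine(left, right):
        # Raise the peak column of the weaker member to parity with the
        # peak of the stronger member, then concatenate.
        peak = max(t for _, t in left + right)
        for part in (left, right):
            top = max(t for _, t in part)
            for i, (word, ticks) in enumerate(part):
                if ticks == top:
                    part[i] = (word, peak)
        return left + right

    def promote(grid, index):
        # Make the word at `index` the main stress of the whole nominal.
        word, _ = grid[index]
        grid[index] = (word, max(t for _, t in grid) + 1)
        return grid

    dog_catcher = [('dog', 2), ('catcher', 1)]
    tsbm = [('training', 3), ('school', 1), ('board', 2), ('meeting', 1)]
    print(promote(combine(dog_catcher, tsbm), 0))
    # [('dog', 4), ('catcher', 1), ('training', 3), ('school', 1),
    #  ('board', 2), ('meeting', 1)]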
4. Performance of the Heuristic on 200 Binary Nominals

To get a rough idea of how well our program is doing, we took 200 [N N] nominals from the Bell Labs News, and compared the performance of the current heuristic with two other procedures: (1) assigning stress uniformly to the right (which is what all current text-to-speech systems would do in such cases) and (2) assigning stress to the left if and only if the binary nominal can be analyzed as consisting of a noun followed by a noun. We had made no previous effort to develop heuristics appropriate for the content of this source material. The results were as follows:

(8) (i) Assigning uniform rightward stress: 45% correct.
    (ii) Assigning leftward stress if N-N: 66%.
    (iii) Current program: 80%.

Of our program's 40-odd failures, the cause was insufficient information in roughly 30 cases; only 10 were due to misanalysis. We classified the failure as being due to insufficient information when the program could say nothing about the categorization of either member of the compound, or could only ascertain that it might be dealing with a noun-noun sequence (which, the reader will recall, is given very little weight in making a decision). For instance, the program knows nothing about the stress properties of chemical terms, which invariably have right-hand stress, and therefore failed on gallium arsenide and several similar expressions. If the program had some information about at least one of the words, but still came up with the wrong answer, then we classified the error as a case of misanalysis.

The fact that most of the errors were due to insufficient information suggests that the program can be improved substantially by increasing its set of heuristic patterns and its knowledge of word classes. We guess that 90-95% correct stress is a plausible goal for [N N] nominals, even in technical writing, where our experience suggests that readers will assign left-hand and right-hand stress to such constituents with about equal frequency.

5. The Parsing Issue

Our stress assignment program assumes a parsed input, not a reasonable option for a working text-to-speech system. There is some practical value in correct stress assignment to binary nominals only, since they are commoner than longer ones in most kinds of text; in the Tagged Brown Corpus (Francis and Kučera, 1982) we found that roughly 80% of the complex nominals were binary, 15% were ternary, and that therefore only about 5% had more than three members. Still, a count of 15% for ternary nominals is significant. Furthermore, higher percentages for complex nominals with more than two members are expected for technical writing than are exhibited in the Brown Corpus. We have therefore also investigated the use of the stress-assignment heuristics in parsing nominal expressions of higher complexity than binary.

How would such patterns be useful? Consider an expression like water supply control, to which we would want to assign the structure [[water supply] control]. Given that we assume binary branching, we have two options, namely [water [supply control]] and [[water supply] control]. While the first analysis is not impossible, the second analysis would be favored since one of our patterns references the word supply, and lists substances such as water among the types of things that can have supplies. In effect, supply has a slot to its left which can optionally be filled by a noun referring to a substance or commodity of some kind, among which water is a prominent example. The word supply is not nearly so close to the core examples of likely arguments for control. Of course, listed complex nominals straightforwardly aid in parsing: a nominal such as City Hall parking lot is fairly easy to analyze given that in any case City Hall and parking lot are in our phrasal lexicon. It seems clear that substantial amounts of lexical knowledge are necessary to parse complex nominals. This comes as no surprise, in light of much recent linguistic work suggesting that a substantial portion of linguistic knowledge resides 'in the lexicon.'
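One way to picture how such pattern knowledge could drive parsing is to let pair scores rank alternative bracketings. The Python sketch below is hypothetical: the pair-score table stands in for the pattern matcher, and taking the rightmost word as the head of a nominal is a simplification.

    def bracketings(words):
        # Enumerate all binary-branching analyses of a word sequence.
        if len(words) == 1:
            return [words[0]]
        return [(l, r)
                for i in range(1, len(words))
                for l in bracketings(words[:i])
                for r in bracketings(words[i:])]

    def head(tree):
        # Simplification: the rightmost word is the lexical head.
        return tree if isinstance(tree, str) else head(tree[1])

    # Stand-in plausibility table: 'supply' likes a substance to its left.
    PAIR_SCORE = {('water', 'supply'): 5, ('supply', 'control'): 3}

    def score(tree):
        if isinstance(tree, str):
            return 0
        left, right = tree
        return (PAIR_SCORE.get((head(left), head(right)), 0)
                + score(left) + score(right))

    print(max(bracketings(['water', 'supply', 'control']), key=score))
    # (('water', 'supply'), 'control')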
References

Bolinger, D. 1972. Accent is Predictable (if you're a mind-reader). Language 48, 633-645.

Chomsky, N. and M. Halle 1968. The Sound Pattern of English. New York: Harper and Row.

Finin, T. 1980. The Semantic Interpretation of Compound Nominals. Doctoral dissertation, University of Illinois.

Francis, W. N. and Kučera, H. 1982. Frequency Analysis of English Usage: Lexicon and Grammar. Boston: Houghton Mifflin Company.

Fudge, E. 1984. English Word-Stress. London and Boston: Allen and Unwin.

Hayes, B. 1980. A Metrical Theory of Stress Rules. Doctoral dissertation, MIT, distributed by Indiana University Linguistics Club.

Jackendoff, R. 1977. X-bar Syntax: A Study of Phrase Structure. Cambridge and London: MIT Press.

Lees, R. 1960. The Grammar of English Nominalizations. Bloomington: Indiana University Press.

Levi, J. 1978. The Syntax and Semantics of Complex Nominals. New York and London: Academic Press.

Liberman, M. 1975. The Intonational System of English. Doctoral dissertation, MIT, reprinted 1979 by Garland, New York and London.

Liberman, M. and A. Prince 1977. On Stress and Linguistic Rhythm. Linguistic Inquiry 8, 249-336.

Liberman, M. and R. Sproat 1986. Stress Patterns in English Noun Phrases. Ms., AT&T Bell Labs.

Marcus, M. 1980. A Theory of Syntactic Recognition for Natural Language. Cambridge and London: MIT Press.

Prince, A. 1983. Relating to the Grid. Linguistic Inquiry 14, 19-100.

Quirk, R., S. Greenbaum and G. Leech 1972. A Grammar of Contemporary English. London: Longman.

Selkirk, E. 1984. Phonology and Syntax. Cambridge and London: MIT Press.
THE INTERPRETATION OF TENSE IN DISCOURSE

Bonnie Lynn Webber
Department of Computer & Information Science
University of Pennsylvania
Philadelphia PA 19104-6389

Abstract

This paper gives an account of the role tense plays in the listener's reconstruction of the events and situations a speaker has chosen to describe. Several new ideas are presented: (a) that tense is better viewed by analogy with definite NPs than with pronouns; (b) that a narrative has a temporal focus that grounds the context-dependency of tense; and (c) that focus management heuristics can be used to track the movement of temporal focus. 1

1. Introduction

My basic premise is that in processing a narrative text, a listener is building up a representation of the speaker's view of the events and situations being described and of their relationship to one another. This representation, which I will call an event/situation structure or e/s structure, reflects the listener's best effort at interpreting the speaker's ordering of those events and situations in time and space. The listener's problem can therefore be viewed as that of establishing where in the evolving e/s structure to attach the event or situation described in the next clause. My claim is that the discourse interpretation of tense contributes to the solution of this problem.

This work on the discourse interpretation of tense is being carried out in the context of a larger enterprise whose goal is an account of explicit anaphoric reference to events and situations, as in Example 1.

Example 1
It's always been presumed that when the glaciers receded, the area got very hot. The Folsum men couldn't adapt, and they died out. That's what's supposed to have happened. It's the textbook dogma. But it's wrong. They were human and smart. They adapted their weapons and culture, and they survived.

Example 1 shows that one may refer anaphorically to structured entities built up through multiple clauses. Thus an account of how clauses arrange themselves into structures is necessary to an account of event reference. 2

1 This work was partially supported by ARO grant DAA29-84og-0027, NSF grant MCS-8219116-CER, and DARPA grant N00014-85-K-0018 to the University of Pennsylvania, and by DARPA grant N00014-85-C-0012 to UNISYS.
2 Other parts of the enterprise include a general mechanism for individuating composite entities made up of ones separately introduced [20, 21] and a representation for events that allows for anaphoric reference to both particular events and situations and to abstractions thereof [16].

In this paper, I will relate the problem of building up an e/s structure to what has been described as the anaphoric property of tense [7, 11, 6, 1, 12] and of relative temporal adverbials [18]. Anaphora are expressions whose specification is context-dependent. Tense and relative temporal adverbials, I interpret as specifying positions in an evolving e/s structure. My view of their anaphoric nature is that the particular positions they can specify depend on the current context. And the current context only makes a few positions accessible. (This I will claim to be in contrast with the ability of temporal subordinate clauses and noun phrases (NPs) to direct the listener to any position in the evolving structure.)

The paper is organized as follows: In Section 2, I discuss tense as an anaphoric device. Previous work in this area has discussed how tense is anaphoric, claiming as well that it is like a pronoun.
While agreeing as to the source of the anaphoric character of tense, I do not think the analogy with pronouns has been productive. In contrast, I discuss what I believe to be a more productive analogy between tense and definite noun phrases. Previous work has focussed on the interpretation of tensed clauses in simple linear narratives (i.e., narratives in which the order of underlying events directly corresponds to their order of presentation). 3 Here the most perplexing question involves when the next clause in a sequence is interpreted as an event or sequence coincident with the previous one and when, as following the previous one [4, 6, 12]. In Section 3, I show that if one moves beyond simple linear narratives, there are more options. In terms of the framework proposed here, there may be more than one position in the evolving e/s structure which can provide a context for the interpretation of tense. Hence there may be more than one position in e/s structure which tense can specify and which the new event or situation can attach to. To model the possible contexts, I introduce a discourse-level focussing mechanism - temporal focus or TF - similar to that proposed for interpreting pronouns and definite NPs [17]. I give examples to show that change of TF is intimately bound up with narrative structure. To keep track of and predict its movement, I propose a set of focus heuristics: one Focus Maintenance Heuristic, predicting regular movement forward, two Embedded Discourse Heuristics for stacking the focus and embarking on an embedded narrative, and one Focus Resumption Heuristic for returning and resuming the current narrative. In Section 4, I show that relative temporal adverbials display the same anaphoric property as simple tense.

That the interpretation of tense should be entwined with discourse structure in this way should not come as a surprise, as a similar thing has been found true of other discourse anaphora [5].

3 Another person currently addressing the interpretation of tense and aspect in more complex narratives is Nakhimovsky [9, 10]. Though we are addressing somewhat different issues, his approach seems very compatible with this one.

2. Tense as Anaphor

Tense does not seem prima facie anaphoric: an isolated sentence like "John went to bed" or "I met a man who looked like a basset hound" appears to make sense without previously establishing when it happened. On the other hand, if some time or event is established by the context, tense will invariably be interpreted with respect to it, as in:

Example 2
After he finished his chores, John went to bed.
John partied until 3am. He came home and went to bed.

In each case, John's going to bed is linked to an explicitly mentioned time or event. This linkage is the anaphoric property of tense that previous authors have described. Hinrichs [6] and Bauerle [1], following McCawley [7] and Partee [11], showed that it is not tense per se that is interpreted anaphorically, but that part of tense called by Reichenbach [14] reference time. 4

According to Reichenbach, the interpretation of tense requires three notions: speech time (ST), event time (ET), and reference time (RT). RT is the time from which the event/situation described in the sentence is viewed. It may be the same as ST, as in

  present perfect: ET < RT = ST   (John has climbed Aconcagua and Mt. McKinley.)
  simple present:  ET = RT = ST   (John is in the lounge.)

the same as ET, as in

  simple past:   ET = RT < ST   (John climbed Aconcagua.)
  simple future: ST < ET = RT   (John will climb Aconcagua.)

in between ET and ST, as in

  past perfect: ET < RT < ST   (John had climbed Aconcagua.)

or following both ET and ST (looking back to them), as in

  future perfect: ST < ET < RT   (John will have climbed Mt. McKinley.)
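These six configurations can be summarized in a small table. The following Python encoding is purely illustrative (times are represented as numbers, which is of course a simplification of Reichenbach's scheme):

    import operator

    REICHENBACH = {
        'present perfect': [('ET', '<', 'RT'), ('RT', '=', 'ST')],
        'simple present':  [('ET', '=', 'RT'), ('RT', '=', 'ST')],
        'simple past':     [('ET', '=', 'RT'), ('RT', '<', 'ST')],
        'simple future':   [('ST', '<', 'ET'), ('ET', '=', 'RT')],
        'past perfect':    [('ET', '<', 'RT'), ('RT', '<', 'ST')],
        'future perfect':  [('ST', '<', 'ET'), ('ET', '<', 'RT')],
    }

    OPS = {'<': operator.lt, '=': operator.eq}

    def consistent(tense, times):
        # times: e.g. {'ET': 1, 'RT': 2, 'ST': 3}, numbers as toy times
        return all(OPS[op](times[a], times[b])
                   for a, op, b in REICHENBACH[tense])

    print(consistent('past perfect', {'ET': 1, 'RT': 2, 'ST': 3}))  # True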
That it is RT that is interpreted anaphorically, and not either ET or tense as a whole, can be seen by considering Example 3.

Example 3
John went to the hospital. He had twisted his ankle on a patch of ice.

It is not the ET of John's twisting his ankle that is interpreted anaphorically with respect to his going to the hospital. Rather, it is the RT of the second clause: its ET is interpreted as prior to that because the clause is in the past perfect tense (see above).

Having said that it is the RT of tense whose interpretation is anaphoric, the next question to ask is what kind of anaphoric behavior it evinces. In previous work, tense is claimed to behave like a pronoun. Partee [12] makes the strongest case, claiming that pronouns and tense display the same range of antecedent-anaphor linkages:

Deictic Antecedents
  pro: She left me! (said by a man crying on the stoop) 5
  tense: I left the oven on! (said by a man to his wife in the car)

Indefinite Antecedents
  pro: I bought a banana. I took it home with me.
  tense: I bought a banana. I took it home with me. <I took it home after I bought it.>

Bound Variables
  pro: Every man thinks he is a genius.
  tense: Whenever Mary phoned, Sam was asleep. <Mary phoned at time t, Sam was asleep at t>

Donkey Sentences
  pro: Every man who owns a donkey beats it.
  tense: Whenever Mary phoned on a Friday, Sam was asleep. <Mary phoned at time t on a Friday, Sam was asleep at t on that Friday>

Because of this similarity, Partee and others have claimed that tense is like a pronoun. Their account of how time is then seen to advance in simple linear narratives is designed, in part, to get around the problem that while pronouns co-specify with their antecedents, the RT of clause N cannot just co-specify the same time as the previous clause [6, 12, 4].

There is another option though: one can draw an analogy between tense and definite NPs, which are also anaphoric. Support for this analogy is that, like a definite NP, tense can cause the listener to create something new. With a definite NP, that something new is a new discourse entity [19]. With tense, I will say for now that it is a new time at which the event or situation is interpreted as occurring. 6 If one looks at texts other than simple linear narratives, this ability becomes clear, as the following simple example shows:

Example 4
I was at Mary's house yesterday. We talked about her brother. He spent 5 weeks in Alaska with two friends. Together, they made a successful assault on Denali. Mary was very proud of him.

The event of Mary's brother spending five weeks in Alaska is not interpreted as occurring either coincident with or after the event of my conversation with Mary. Rather, the events corresponding to the embedded narrative in the third and fourth clause are interpreted at a different spatio-temporal location than the conversation. That it is before the conversation is a matter of world knowledge.

4 Hinrichs' work is discussed as well in [12].
5 I believe that the deictic use of pronouns is infelicitous. In this example, the speaker is distraught and making no attempt to be cooperative. It happens. But that doesn't mean that pronouns have deictic antecedents. I include the example here because it is part of Partee's argument.
In the e/s structure for the whole narrative, the tense of the third clause would set up a new position for the events of the embedded narrative, ordered prior to the current position, to site these events.

The claimed analogy of tense with pronouns is based on the similarity in antecedent-anaphor linkages they display. But notice that definite NPs can display the same linkages in two different ways: (1) the definite NP can co-specify with its antecedent, as in the a. examples below, and (2) the definite NP can specify a new entity that is 'strongly' associated with the antecedent and is unique by virtue of that association, as in the b. examples below. 7

Deictic Antecedents
  The car won't start! (said by a man crying on the stoop)

Indefinite Antecedents
  a. I picked up a banana. Up close, I noticed the banana was too green to eat.
  b. I picked up a banana. The skin was all brown.

Bound Variables
  a. Next to each car, the owner of the car was sleeping soundly.
  b. In each car, the engine was idling quietly.

Donkey Sentences
  a. Everyone who wants a car must fix the car himself.
  b. Everyone who owns a Ford tunes the engine himself.

Thus the range of antecedent-anaphor behavior that Partee calls attention to argues equally for an analogy between tense and pronouns as for an analogy between tense and definite NPs.

6 After I say more about e/s structure construction, I will be able to claim that tense can cause the listener to create a new position in e/s structure at which to attach the event or situation described in its associated clause.
7 Clark & Marshall [2] are among those who have described the necessary "common knowledge" that must be assumable by speaker and listener about the association for the specification to be successful.

However, there are two more features of behavior to consider: On the one hand, as noted earlier, definite NPs have a capability that pronouns lack. 8 That is, they can introduce a new entity into the discourse that is 'strongly' associated with the antecedent and is unique by virtue of that association, as in the b. examples above. Example 4 shows that tense has a similar ability. Thus, a stronger analogy can be drawn between tense and definite NPs. On the other hand, definite NPs have the capability to move the listener away from the current focus to a particular entity introduced earlier or a particular entity associated with it. This ability tense lacks. While tense can set up a new node in e/s structure that is strongly associated with its 'antecedent', it does not convey sufficient information to position that node precisely - for example, precisely relative to some other event or situation the listener has been told about. Thus its resemblance to definite NPs is only partial, although it is stronger than its resemblance to pronouns. To locate a node precisely in e/s structure requires the full temporal correlate of a definite NP - that is, a temporal subordinate clause or a definite NP itself, as in Example 5.

Example 5
The bus reached the Stadium, terminal for the suburban bus services. Here De Witt had to change to a streetcar. The wind had abated but the rain kept falling, almost vertically now. He was travelling to a two o'clock appointment at Amsterdam police headquarters in the center of town, and he was sure to be late. When De Witt got to the police president's office, he telephoned his house.
[adapted from Hans Koning, De Witt's War]

Notice that without the "when" clause, the simple past tense of "he telephoned his house" would be anaphorically interpreted with respect to the "reaching the Stadium" event, as happening sometime after that. A new node would be created in e/s structure ordered sometime after the "reaching the Stadium" event. On the other hand, with the "when" clause, that new node can be ordered more precisely after the "reaching the Stadium" event. By association with its "antecedent" (the "travelling to the appointment" event), it can be ordered after the achievement of that event.

There is another advantage to be gained by pushing further the analogy between tense and definite NPs that relates to the problem tackled in [6, 4, 12] of how to reconcile the anaphoric nature of tense with the fact that the event or situation described in the next clause varies as to whether it is taken to be coincident with, during, before or after the event or situation described in the previous clause. This I will discuss in the next section, after introducing the notion of temporal focus.

8 Except for "pronouns of laziness", which can evoke and specify new entities through the use of previous descriptions.

3. Temporal Focus

In this section, I give a more specific account of how the discourse interpretation of tense relates to e/s structure construction. At any point N in the discourse, there is one node of e/s structure that provides a context for the interpretation of the RT of the next clause. I will call it the temporal focus or TF. There are three possibilities: (1) the RT of the next clause will be interpreted anaphorically against the current TF, (2) the TF will shift to a different node of e/s structure - either one already in the structure or one created in recognition of an embedded narrative - and the RT interpreted with respect to that node, or (3) the TF will return to the node previously labelled TF, after completing an embedded narrative, as in (2), and the RT interpreted there. These three behaviors are described by four focus management heuristics described in this section: a Focus Maintenance Heuristic, two Embedded Discourse Heuristics and a Focus Resumption Heuristic. 9

In [21], I presented a control structure in which these heuristics were applied serially. The next heuristic would only be applied when the prediction of the previous one was rejected on grounds of "semantic or pragmatic inconsistency". I now believe this is an unworkable hypothesis. Maintaining it requires (1) identifying grounds for such rejection and (2) arguing that one can reject proposals, independent of knowing the alternatives. I now don't believe that either can be done. It is rarely the case that one cannot come up with a story linking two events and/or situations. Thus it would be impossible to reject a hypothesis on grounds of inconsistency. All one can say is that one of such stories might be more plausible than the others by requiring, in some sense not explored here, fewer inferences. 10 Thus I would now describe these heuristics as running in parallel, with the most plausible prediction being the one that ends up updating both e/s structure and the TF. For clarity in presentation though, I will introduce each heuristic separately, at the point that the next example calls for it.

3.1. Interpreting RT against TF

Before presenting the temporal focus management heuristics, I want to say a bit more about what it can mean to interpret the RT of the next clause against the current TF.

9 Rohrer [15] suggests that there may exist a set of possible temporal referents, possibly ordered by saliency, among which the tense in a sentence may find its reference time, but doesn't elaborate how. That is the only thing I have seen that comes close to the current proposal.
10 Crain and Steedman [3] make a similar argument about prepositional phrase (PP) attachment. For example, it is not impossible for a cat to own a telescope - e.g., by inheritance from its former owner. Thus "a cat with a telescope" is not an inconsistent description. However, it must compete with other plausible interpretations like "seeing with a telescope" in "I saw a cat with a telescope".
This discussion points out the additional advantage to be gained by pushing the analogy between tense and definite NPs. As I noted above, a definite NP can specify an entity 'strongly' associated with its antecedent. One might thus consider what is 'strongly' associated with an event. One answer to this question appears in two separate papers in this volume [8, 13], each ascribing a tripartite structure to the way we view and talk about events. This structure consists of a preparatory phase, a culmination, and a consequence phase, to use the terminology of [8]. (Such a structure is proposed, in part, to give a uniform account of how the interpretation of temporal adverbials interacts with the interpretation of tense and aspect.)

Nodes in e/s structure correspond to events and situations, as the speaker conceives them. If one associates such a structure with the node labelled the current TF, then one can say that 'strongly' associated with it are events and situations that could make up its preparatory phase, culmination or consequence phase. Like a definite NP, the RT of tense may either co-specify the current TF or set up a new node in e/s structure 'strongly' associated with the TF. In the latter case, its corresponding event or situation will be interpreted as being part of one of these three phases, depending on the speaker and listener's assumed shared knowledge. Since, arguably, the most common way of perceiving the world is as an ordered sequence of events, this will increase the plausibility of interpreting the next event or situation as (1) still associated with the current TF and (2) part of the consequence phase of that event (i.e., after it). On the other hand, this 'strong association' treatment no longer limits anaphoric interpretation to "co-specify" or "right after" as in [4, 6, 12]. The event described can be anaphorically associated with the whole event structure (Example 6a), the consequence phase (Example 6b - "right after"), or the preparatory phase (Example 6c - "before").

Example 6
a. John walked across Iowa. He thought about Mary, who had run off with a computational linguist.
b. John walked across Iowa. He crossed the state line at Council Bluffs and headed west through Nebraska.
c. John walked across Iowa. He started in Sioux City and headed east to Fort Dodge.

Deciding which of these three options holds in a given case demands an appeal to world knowledge (e.g. which actions can be performed simultaneously by a single agent). This is yet another area demanding further study and is not treated in this paper. 11

11 Mark Steedman shares responsibility for this idea, which is also mentioned in his paper with Marc Moens in this volume [8].
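Viewed computationally, the tripartite structure suggests associating each e/s node with three attachment sites, one per phase. The Python sketch below is an invented representation, not something proposed in the text; it only records the three options that Example 6 illustrates.

    from dataclasses import dataclass, field

    @dataclass
    class EventNode:
        description: str
        preparation: list = field(default_factory=list)   # 'before' (6c)
        during: list = field(default_factory=list)        # whole event (6a)
        consequence: list = field(default_factory=list)   # 'right after' (6b)

    def attach(tf, event, phase):
        # phase is 'preparation', 'during' or 'consequence'; choosing among
        # them is the world-knowledge judgement the text leaves open.
        getattr(tf, phase).append(event)

    walk = EventNode('John walked across Iowa')
    attach(walk, EventNode('he thought about Mary'), 'during')           # 6a
    attach(walk, EventNode('he crossed the state line'), 'consequence')  # 6b
    attach(walk, EventNode('he started in Sioux City'), 'preparation')   # 6c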
3.2. Focus Maintenance and Focus Movement

The following pair of examples illustrates the simplest movement of temporal focus in a discourse and its link with e/s structure construction.

Example 7a
1. John went over to Mary's house.
2. On the way, he had stopped by the flower shop for some roses.
3. Unfortunately the roses failed to cheer her up.

Example 7b
1. John went over to Mary's house.
2. On the way, he had stopped by the flower shop for some roses.
3. He picked out 5 red ones, 3 white ones and one pale pink.

Since the first two clauses are the same in these examples, I will explain them together. With no previous temporal focus (TF) established prior to clause 1, the listener creates a new node of e/s structure, ordered prior to now, to serve as TF. TF sites the anaphoric interpretation of RT1, which, because clause 1 is in the simple past, also sites ET1. This is shown roughly in Figure 3-1.

[Figure 3-1: E/S structure after processing clause 1]

The first heuristic to be introduced is a Focus Maintenance Heuristic (FMH): after interpreting clause N, the new TF is the most recent TF - i.e., the node against which RTN was interpreted.

The most recent TF is cotemporal with RT1. This new TF now provides a site for interpreting RT2. Since clause 2 is past perfect, ET2 is interpreted as being prior to RT2. E/s structure is now roughly as shown in Figure 3-2.

[Figure 3-2: E/S structure after processing clause 2]

Applying the FMH again, RT2 is the new TF going into clause 3. Examples 7a and 7b here diverge in what subsequently happens to the TF. In 7a, RT3 can be anaphorically interpreted as immediately following the TF. Since RT3 in turn directly sites ET3 (clause 3 being simple past), the "failing" event is interpreted as immediately following the "going over to Mary's house" event. This is shown roughly in Figure 3-3. (TF is shown already moved forward by the FMH, ready for the interpretation of the next clause, if any.)

[Figure 3-3: E/S structure after processing clause 7a-3]

To get the most plausible interpretation of 7b - i.e., where the "rose picking" event is interpreted anaphorically with respect to the "flower shop" event - requires a second heuristic, which I will call an Embedded Discourse Heuristic. This will be EDH-1, since I will introduce another Embedded Discourse Heuristic a bit later: if ETN is different from RTN = TF, treat utterance N as the beginning of an embedded narrative, reassign ETN to TF (stacking the previous value of TF, for possible resumption later) and try to interpret RTN+1 against this new TF.

By this heuristic winning the plausibility stakes against the FMH, TF is reassigned to ET2 (stacking the previous TF, which is sited at RT2 = RT1 = ET1), and RT3 is anaphorically interpreted as following this new TF. As before, ET3 is sited directly at RT3 (since simple past), so the "picking out the roses" event is viewed as immediately following the "stopping at the florist" event. This is shown roughly in Figure 3-4.

[Figure 3-4: E/S structure after processing clause 7b-3]

Now consider the following extension to example 7b.

Example 7c
1. John went over to Mary's house.
2. On the way, he had stopped by the flower shop for some roses.
3. He picked out 5 red ones, 3 white ones and one pale pink.
4. Unfortunately they failed to cheer her up.

First notice that clauses 2-3 form an embedded narrative that interrupts the main narrative of John's visit to Mary's. The main sequence of events that begins with clause 1 resumes at clause 4. Now consider the anaphoric interpretation of tense. Clauses 1-3 are interpreted as in Example 7b (cf. Figure 3-4). The problem comes in the interpretation of clause 7c-4.
Now consider the anaphoric interpretation of tense. Clauses 1-3 are interpreted as in Example 7b (cf. Figure 3-4). The problem comes in the interpretation of Clause 7c-4. 151 To get the most plausible interpretation requires a third heuristic which I will call a Focus Resumption Heuristic (FRH). At the transition bade from an embedded nan'alive, the TF prior to the embedding (stacked by an Embedded Discourse Heuristic) can be resumed. Using this heuristic, the previously stacked TF (sited at RT2=RT1-ET 1 - the "going to Mary's house" event) becomes the new TF, and RT 4 is interpreted as directly following it. Since clause 7c-4 is simple past, the "failing" event is again correctly interpreted as immediately following the "going over to Mary's house" event. This is shown roughly in Figure 3-5. E~ I | ~ L ~F Figure 3-5: EJS structure after processing clause 7c-4 I have already noted that, like a definite NP, tense can cause the listener to create a new node in e/s structure to site its RT. What I want to consider here is the circumstances under which a reader is likely to create a new node of e/s structure to interpret RTN.I, rather than using an existing node (i.e., the current TF, one associated with the previous event (if not the TF) or a previous, stacked TF). One circumstance I mentioned earlier was at the beginning of a discourse: a reader will take an introductory sentence like Snoopy's famous first line It was a dark and stormy night. and start building up a new e/s structure with one node corresponding to ST and another node siting RT and ET, Generalizing this situation to the beginning of embedded narratives as well, I propose a second Embedded Discourse Heuristic (EDH-2): If clause N+t is interpreted as beginning an embedded narrative, create a new node of e/s structure and assign it to be TF. Stack the previous value of TF, for possible resumption later. EDH-2 differs from EDH-1 in being keyed by the new clause itself: there is no existing event node of els structure, different from the currant TF, which the embedded narrative is taken to further describe. EDH-2 explains what is happening in interpreting the third clause of Example 4. Even though all the clauses of Example 4 are simple past, with ET=RT, the third clause is most plausibly interpreted as describing an event which has ocoured prior to the *telling about her brother" event. EDH-2 provides the means of interpreting the tense in an embedded narrative whose events may occur either before or even after the current TF. Example 4 1. I was at Mary's house yesterday. 2. We talked about her brother. 3. He spent 5 weeks in Alaska with two friends. 4. Together, they made a successful assault on Denali. 5. Mary was very proud of him. Notice that the focus stacking specified in EDH-2 enables the correct interpretation of clause 4-5, which is most plausibly interpreted via the FRH as following the "telling about her brother" event. EDH-2 is also relevant for the interpretation of NPs headed by de-verbal nouns (such as "trip', "installation', etc.). While such a NP may describe an event or situation, there may not be enough information in the NP itself or in its clause to locate the event or situation in els structure (of. "my trip to Alaska" versus "my recent/upcoming trip to Alaska'). On the other hand, EDH-2 provides a way of allowing that information to come from the subsequent discourse. 
That is, if the following clause or NP can be interpreted as describing a particular event/situation, the original NP and the subsequent NP or clause can be taken as co-specifying the same thing. Roughly, that is how I propose treating cases such as the following variation of Example 4: Example 8 1. I was talking with Mary yesterday. 2. She told me about her trip to Alaska. 3. She spent five weeks there with two friends, and the three of them climbed Denali. The NP "her trip to Alaska" does not of itself cause an addition to e/s structure. 12 Rather, application of EDH-2 to the interpretation of clause 5-3 results in the creation of a new node of els structure against which its RT is sited. Other reasoning results in clause 3 and "her trip to Alaska" being taken as co-specifying the same event. This is what binds them together and associates "her trip to Alaska" with a node of e/s structure. Rnally, notice that there will be an ambiguity when more than heuristic makes a plausible prediction, as in the following example: Example 9 1. I told Frank about my meeting with Ira. 2. We talked about ordering a butterfly. It is plausible to take the second utterance as the beginning of an embedded narrative, whereby EDH-2 results in the "talking about" event being interpreted against a new node of els structure, situated prior to the "telling Frank" event. (In this case, "we" is Ira and me.) It is also plausible to take the second utterance as continuing the current narrative, whereby FMH results in the "talking about" event being interpreted with respect to the "telling Frank" event. (In contrast here, "we" is Frank and me.) 1=It does, of course, result in Re creation of a discourse entity [19]. The relationship I see between t~e listener's e/s structure and his'her dlacoume model is discussed in [21 ]. 152 4. Temporal Focus and Temporal Adverbials So far I have only shown that clauses containing no other time-related constructs than tense can be interpreted anaphorically against more than one site in ale structure. Now I want to show, at least by example, that what I have proposed holds for clauses containing relative temporal adverbs as well. Relative temporal adverbials must be interpreted with respect to some other time [18]. So consider the italicized forms in the following brief texts. John became the captain of Penn's squash team. He was previously captain of the Haverford team. John left for London on Sunday. Tuesday he went to Cambridge. Tuesday John went to Cambridge. On Sunday, he left for London. Previously is interpreted with respect to the previously mentioned "becoming captain" event: it was before that that he was captain at Haverford. In the second case, the adverbial On Sunday, given no previous link in the discourse, is interpreted with respect to ST. However, Tuesday is then interpreted with respect to the event of John's leaving for London: it is interpreted as the Tuesday after that event. The third case is the reverse. What I want to show is that, as before, the same four heuristics predict the sites in els structure that may provide a context for a relative temporal adverbial. Consider the following. Example 10a 1. John went over to Mary's house. 2. On the way, he had stopped by the flower shop for some roses. 3. After five minutes of awkwardness, he gave her the flowers Example 10b 1. John went over to Mary's house. 2. On the way, he had stopped by the flower shop for some roses. 3. After 20 minutes of waiting, he left with the bouquet and fairly ran to Mary's. 
4. Temporal Focus and Temporal Adverbials

So far I have only shown that clauses containing no other time-related constructs than tense can be interpreted anaphorically against more than one site in e/s structure. Now I want to show, at least by example, that what I have proposed holds for clauses containing relative temporal adverbials as well. Relative temporal adverbials must be interpreted with respect to some other time [18]. So consider the italicized forms in the following brief texts.

   John became the captain of Penn's squash team. He was previously captain of the Haverford team.

   John left for London on Sunday. Tuesday he went to Cambridge.

   Tuesday John went to Cambridge. On Sunday, he left for London.

Previously is interpreted with respect to the previously mentioned "becoming captain" event: it was before that that he was captain at Haverford. In the second case, the adverbial On Sunday, given no previous link in the discourse, is interpreted with respect to ST. However, Tuesday is then interpreted with respect to the event of John's leaving for London: it is interpreted as the Tuesday after that event. The third case is the reverse. What I want to show is that, as before, the same four heuristics predict the sites in e/s structure that may provide a context for a relative temporal adverbial. Consider the following.

Example 10a
1. John went over to Mary's house.
2. On the way, he had stopped by the flower shop for some roses.
3. After five minutes of awkwardness, he gave her the flowers.

Example 10b
1. John went over to Mary's house.
2. On the way, he had stopped by the flower shop for some roses.
3. After 20 minutes of waiting, he left with the bouquet and fairly ran to Mary's.

I will use ADV to refer to the interpretation of the "after" adverbial. In these cases, what is sited by TF is the beginning of the interval. What in turn sites the RT of the main clause is the end of the interval. The processing of the first two clauses is just the same as in Examples 7a and 7b. From here, the two examples diverge.

In 10a-3, the beginning of ADV is most plausibly interpreted with respect to the TF. The end of ADV in turn provides an anaphoric interpretation point for RT3. Since ET3 is interpreted as coincident with RT3 (clause 3 being simple past), the "rose giving" event is interpreted as immediately following John's getting to Mary's house. This is shown roughly in Figure 4-1.

[Figure 4-1: E/S structure after processing clause 10a-3]

In 10b-3, the interpretation due to FMH is less plausible than that due to EDH-1. EDH-1 re-assigns TF to ET2, where the beginning of ADV is then sited. The end of ADV in turn provides an anaphoric interpretation point for RT3. Since ET3 is sited at RT3, the "leaving with the bouquet" event is sited at the end of the twenty minutes of waiting. This is shown roughly in Figure 4-2.

[Figure 4-2: E/S structure after processing clause 10b-3]

An interesting question to consider is whether a speaker would ever shift the TF as modelled by the FRH or the EDH-2, while simultaneously using a relative temporal adverbial whose interpretation would have to be linked to the new TF, as in Example 11 (movement via FRH) and Example 12 (movement via EDH-2).

Example 11
1. John went over to Mary's house.
2. On the way, he had stopped by the flower shop for some roses.
3. He picked out 5 red ones, 3 white ones and one pale pink.
4. After 5 minutes of awkwardness, he gave her the flowers.

Example 12
1. I was at Mary's house yesterday.
2. We talked about her brother.
3. After 6 months of planning, he went to Alaska with two friends.
4. Together, they made a successful assault on Denali.
5. Mary was very proud of him.

I find both examples a bit awkward, but nevertheless understandable. Accounting for TF movement in each of them is straightforward. However, whether to attribute the awkwardness of these examples to exceeding people's processing capabilities or to a problem with the theory is grist for further study.

5. Conclusion

In this paper, I have given what I believe to be a credible account of the role that tense plays in the listener's reconstruction of the events and situations a speaker has chosen to describe. I have provided support for several new ideas: (a) that tense is better viewed by analogy with definite NPs than with pronouns; (b) that a narrative has a temporal focus that grounds the context-dependency of tense; and (c) that focus management heuristics can be used to track the movement of temporal focus. I have also identified a host of problems that require further work, including (1) how to incorporate aspectual interpretation into the model, (2) how to evaluate 'strong associations' between events and/or situations and (3) how to judge plausibility.

Acknowledgments

I would like to extend my thanks to Debby Dahl, Martha Palmer and Becky Passonneau at UNISYS for their enthusiastic support and trenchant criticism. I have also gained tremendously from discussions with James Allen, Barbara Grosz, Erhard Hinrichs, Aravind Joshi, Hans Kamp, Ethel Schuster, Candy Sidner, and Mark Steedman.

References

1. Bäuerle, R. Temporale Deixis, temporale Frage. Gunter Narr Verlag, Tübingen, 1979.
2. Clark, H. & Marshall, C.
Definite Reference and Mutual Knowledge. In Elements of Discourse Understanding, A.K. Joshi, B.L. Webber & I.A. Sag, Ed., Cambridge University Press, Cambridge England, 1981, pp. 10-63.
3. Crain, S. & Steedman, M. On not being Led up the Garden Path: the use of context by the psychological syntax processor. In Natural Language Parsing, D. Dowty, L. Karttunen & A. Zwicky, Ed., Cambridge Univ. Press, Cambridge England, 1985, pp. 320-358.
4. Dowty, D. "The Effects of Aspectual Class on the Temporal Structure of Discourse: Semantics or Pragmatics". Linguistics and Philosophy 9, 1 (February 1986), 37-62.
5. Grosz, B. & Sidner, C. "Attention, Intention and the Structure of Discourse". Computational Linguistics 12, 3 (July-September 1986), 175-204.
6. Hinrichs, E. "Temporal Anaphora in Discourses of English". Linguistics and Philosophy 9, 1 (February 1986), 63-82.
7. McCawley, J. Tense and Time Reference in English. In Studies in Linguistic Semantics, C. Fillmore & D.T. Langendoen, Ed., Holt, Rinehart and Winston, Inc., New York, 1971, pp. 97-114.
8. Moens, M. & Steedman, M. Temporal Ontology in Natural Language. Proc. of the 25th Annual Meeting, Assoc. for Computational Linguistics, Stanford Univ., Palo Alto CA, July, 1987. This volume.
9. Nakhimovsky, A. Temporal Reasoning in Natural Language Understanding. Proc. of EACL-87, European Assoc. for Computational Linguistics, Copenhagen, Denmark, April, 1987.
10. Nakhimovsky, A. Tense, Aspect and the Temporal Structure of the Narrative. Submitted to Computational Linguistics, special issue on computational approaches to tense and aspect.
11. Partee, B. "Some Structural Analogies between Tenses and Pronouns in English". Journal of Philosophy 70 (1973), 601-609.
12. Partee, B. "Nominal and Temporal Anaphora". Linguistics and Philosophy 7, 3 (August 1984), 243-286.
13. Passonneau, R. Situations and Intervals. Proc. of the 25th Annual Meeting, Assoc. for Computational Linguistics, Stanford Univ., Palo Alto CA, July, 1987. This volume.
14. Reichenbach, H. The Elements of Symbolic Logic. The Free Press, New York, 1966. Paperback edition.
15. Rohrer, C. Indirect Discourse and 'Consecutio Temporum'. In Temporal Structure in Sentence and Discourse, V. Lo Cascio & C. Vet, Ed., Foris Publications, Dordrecht, 1985, pp. 79-98.
16. Schuster, E. Towards a Computational Model of Anaphora in Discourse: Reference to Events and Actions. CIS-MS-86-34, Dept. of Comp. & Info Science, Univ. of Pennsylvania, June, 1986. Doctoral thesis proposal.
17. Sidner, C. Focusing in the Comprehension of Definite Anaphora. In Computational Models of Discourse, M. Brady & R. Berwick, Ed., MIT Press, Cambridge MA, 1982, pp. 267-330.
18. Smith, C. Semantic and Syntactic Constraints on Temporal Interpretation. In Syntax and Semantics, Volume 14: Tense & Aspect, P. Tedeschi & A. Zaenen, Ed., Academic Press, 1981, pp. 213-237.
19. Webber, B.L. So What Can We Talk about Now? In Computational Models of Discourse, M. Brady & R. Berwick, Ed., MIT Press, Cambridge MA, 1982, pp. 331-371.
20. Webber, B.L. Event Reference. Theoretical Issues in Natural Language Processing (TINLAP-3), Assoc. for Computational Linguistics, Las Cruces NM, January, 1987, pp. 137-142.
21. Webber, B.L. Two Steps Closer to Event Reference. CIS-86-74, Dept. of Comp. & Info Science, Univ. of Pennsylvania, February, 1987.
A CENTERING APPROACH TO PRONOUNS

Susan E. Brennan, Marilyn W. Friedman, Carl J. Pollard
Hewlett-Packard Laboratories
1501 Page Mill Road
Palo Alto, CA 94304, USA

Abstract

In this paper we present a formalization of the centering approach to modeling attentional structure in discourse and use it as the basis for an algorithm to track discourse context and bind pronouns. As described in [GJW86], the process of centering attention on entities in the discourse gives rise to the intersentential transitional states of continuing, retaining and shifting. We propose an extension to these states which handles some additional cases of multiple ambiguous pronouns. The algorithm has been implemented in an HPSG natural language system which serves as the interface to a database query application.

1 Introduction

In the approach to discourse structure developed in [Sid83] and [GJW86], a discourse exhibits both global and local coherence. On this view, a key element of local coherence is centering, a system of rules and constraints that govern the relationship between what the discourse is about and some of the linguistic choices made by the discourse participants, e.g. choice of grammatical function, syntactic structure, and type of referring expression (proper noun, definite or indefinite description, reflexive or personal pronoun, etc.). Pronominalization in particular serves to focus attention on what is being talked about; inappropriate use or failure to use pronouns causes communication to be less fluent. For instance, it takes longer for hearers to process a pronominalized noun phrase that is not in focus than one that is, while it takes longer to process a non-pronominalized noun phrase that is in focus than one that is not [Gui85].

The [GJW86] centering model is based on the following assumptions. A discourse segment consists of a sequence of utterances U1, ..., Um. With each utterance Un is associated a list of forward-looking centers, Cf(Un), consisting of those discourse entities that are directly realized or realized¹ by linguistic expressions in the utterance. Ranking of an entity on this list corresponds roughly to the likelihood that it will be the primary focus of subsequent discourse; the first entity on this list is the preferred center, Cp(Un). Un actually centers, or is "about", only one entity at a time, the backward-looking center, Cb(Un). The backward center is a confirmation of an entity that has already been introduced into the discourse; more specifically, it must be realized in the immediately preceding utterance, Un-1. There are several distinct types of transitions from one utterance to the next. The typology of transitions is based on two factors: whether or not the center of attention, Cb, is the same from Un-1 to Un, and whether or not this entity coincides with the preferred center of Un. Definitions of these transition types appear in figure 1.

                      Cb(Un) = Cb(Un-1)    Cb(Un) ≠ Cb(Un-1)
   Cb(Un) = Cp(Un)    CONTINUING           SHIFTING
   Cb(Un) ≠ Cp(Un)    RETAINING            SHIFTING

   Figure 1: Transition States

These transitions describe how utterances are linked together in a coherent local segment of discourse. If a speaker has a number of propositions to express, one very simple way to do this coherently is to express all the propositions about a given entity (continuing) before introducing a related entity (retaining) and then shifting the center to this new entity. See figure 2. Retaining may be a way to signal an intention to shift.

1. U directly realizes c if U is an utterance (of some phrase, not necessarily a full clause) for which c is the semantic interpretation, and U realizes c if either c is an element of the situation described by the utterance U or c is directly realized by some subpart of U. Realizes is thus a generalization of directly realizes [GJW86].
While we do not claim that speakers really behave in such an orderly fashion, an algorithm that expects this kind of behavior is more successful than those which depend solely on recency or parallelism of grammatical function. The interaction of centering with global focusing mechanisms and with other factors such as intentional structure, semantic selectional restrictions, verb tense and aspect, modality, intonation and pitch accent are topics for further research.

Note that these transitions are more specific than focus movement as described in [Sid83]. The extension we propose makes them more specific still. Note also that the Cb of [GJW86] corresponds roughly to Sidner's discourse focus and the Cf to her potential foci.

The formal system of constraints and rules for centering, as we have interpreted them from [GJW86], is as follows. For each Un in U1, ..., Um:

CONSTRAINTS
1. There is precisely one Cb.
2. Every element of Cf(Un) must be realized in Un.
3. Cb(Un) is the highest-ranked element of Cf(Un-1) that is realized in Un.

RULES
1. If some element of Cf(Un-1) is realized as a pronoun in Un, then so is Cb(Un).
2. Continuing is preferred over retaining, which is preferred over shifting.

As is evident in constraint 3, ranking of the items on the forward center list, Cf, is crucial. We rank the items in Cf by obliqueness of grammatical relation of the subcategorized functions of the main verb: that is, first the subject, object, and object2, followed by other subcategorized functions, and finally, adjuncts. This captures the idea in [GJW86] that subjecthood contributes strongly to the priority of an item on the Cf list.

CONTINUING...
Un+1: Carl works at HP on the Natural Language Project.
Cb: [POLLARD:Carl]
Cf: ([POLLARD:Carl] [HP:HP] [NATLANG:Natural Language Project])

CONTINUING...
Un+2: He manages Lyn.
Cb: [POLLARD:Carl]
Cf: ([POLLARD:A1] [FRIEDMAN:Lyn])
He = Carl

CONTINUING...
Un+3: He promised to get her a raise.
Cb: [POLLARD:A1]
Cf: ([POLLARD:A2] [FRIEDMAN:A3] [RAISE:X1])
He = Carl, her = Lyn

RETAINING...
Un+4: She doesn't believe him.
Cb: [POLLARD:A2]
Cf: ([FRIEDMAN:A4] [POLLARD:A5])
She = Lyn, him = Carl

Figure 2

We are aware that this ranking usually coincides with surface constituent order in English. It would be of interest to examine data from languages with relatively freer constituent order (e.g. German) to determine the influence of constituent order upon centering when the grammatical functions are held constant. In addition, languages that provide an identifiable topic function (e.g. Japanese) suggest that topic takes precedence over subject.
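A sketch of this ranking (ours, for illustration; the grammatical function labels, and their assignment in the example, are assumptions rather than the HPSG grammar's actual encoding):

    # Order realized entities into a Cf list by grammatical obliqueness.

    OBLIQUENESS = {"subj": 0, "obj": 1, "obj2": 2, "comp": 3, "adjunct": 4}

    def forward_centers(realized):
        """realized: (grammatical function, entity) pairs for Un."""
        ordered = sorted(realized, key=lambda fe: OBLIQUENESS[fe[0]])
        return [entity for _, entity in ordered]

    # Un+3 of figure 2, "He promised to get her a raise":
    cf = forward_centers([("obj2", "RAISE:X1"),
                          ("subj", "POLLARD:A2"),
                          ("obj", "FRIEDMAN:A3")])
    print(cf)      # ['POLLARD:A2', 'FRIEDMAN:A3', 'RAISE:X1']
    print(cf[0])   # the preferred center Cp(Un)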
The part of the HPSG system that uses the centering algorithm for pronoun binding is called the pragmatics processor. It interacts with another module called the semantics processor, which computes representations of intrasentential anaphoric relations (among other things). The semantics processor has access to information such as the surface syntactic structure of the utterance. It provides the pragmatics processor with representations which include a set of reference markers. Each reference marker is contraindexed² with expressions with which it cannot co-specify³. Reference markers also carry information about agreement and grammatical function. Each pronominal reference marker has a unique index from A1, ..., An and is displayed in the figures in the form [POLLARD:A1], where POLLARD is the semantic representation of the co-specifier. For non-pronominal reference markers the surface string is used as the index. Indices for indefinites are generated from X1, ..., Xn.

2 Extension

The constraints proposed by [GJW86] fail in certain examples like the following (read with pronouns destressed):

   Brennan drives an Alfa Romeo.
   She drives too fast.
   Friedman races her on weekends.
   She often beats her.

This example is characterized by its multiple ambiguous pronouns and by the fact that the final utterance achieves a shift (see figure 4). A shift is inevitable because of constraint 3, which states that the Cb(Un) must equal the Cp(Un-1) (since the Cp(Un-1) is directly realized by the subject of Un, "Friedman"). However the constraints and rules from [GJW86] would fail to make a choice here between the co-specification possibilities for the pronouns in Un.

Given that the transition is a shift, there seem to be more and less coherent ways to shift. Note that the three items being examined in order to characterize the transition between each pair of anchors⁴ are the Cb of Un-1, the Cb of Un, and the Cp of Un. By [GJW86] a shift occurs whenever successive Cb's are not the same. This definition of shifting does not consider whether the Cb of Un and the Cp of Un are equal. It seems that the status of the Cp of Un should be as important in this case as it is in determining the retaining/continuing distinction.

                      Cb(Un) = Cb(Un-1)    Cb(Un) ≠ Cb(Un-1)
   Cb(Un) = Cp(Un)    CONTINUING           SHIFTING-1
   Cb(Un) ≠ Cp(Un)    RETAINING            SHIFTING

   Figure 3: Extended Transition States

Therefore, we propose the following extension which handles some additional cases containing multiple ambiguous pronouns: we have extended rule 2 so that there are two kinds of shifts. A transition for Un is ranked more highly if Cb(Un) = Cp(Un); this state we call shifting-1 and it represents a more coherent way to shift. The preferred ranking is continuing > retaining > shifting-1 > shifting (see figure 3). This extension enables us to successfully bind the "she" in the final utterance of the example in figure 4 to "Friedman." The appendix illustrates the application of the algorithm to figure 4.

Kameyama [Kam86] has proposed another extension to the [GJW86] theory -- a property-sharing constraint which attempts to enforce a parallelism between entities in successive utterances. She considers two properties: SUBJ and IDENT. With her extension, subject pronouns prefer subject antecedents and non-subject pronouns prefer non-subject antecedents. However, structural parallelism is a consequence of our ordering the Cf list by grammatical function and the preference for continuing over retaining. Furthermore, the constraints suggested in [GJW86] succeed in many cases without invoking an independent structural parallelism constraint, due to the distinction between continuing and retaining, which Kameyama fails to consider. Her example, which we reproduce in figure 5, can also be accounted for using the continuing/retaining distinction.⁵

2. See [BP80] and [Cho80] for conditions on coreference.
3. See [Sid83] for definition and discussion of co-specification. Note that this use of co-specification is not the same as that used in [Sel85].
4. An anchor is a <Cb, Cf> pair for an utterance.
5. It seems that property sharing of IDENT is still necessary to account for logophoric use of pronouns in Japanese.
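Read as a decision procedure, the two tests of figure 3, together with the extended rule 2, look as follows (a Python sketch of our own, not the HPSG implementation):

    # Classify a transition per figure 3 and rank by the extended rule 2.

    PREFERENCE = ["CONTINUING", "RETAINING", "SHIFTING-1", "SHIFTING"]

    def classify(cb_prev, cb, cp):
        if cb == cb_prev:
            return "CONTINUING" if cb == cp else "RETAINING"
        # Cb has changed: the shift is more coherent (shifting-1) when
        # the new Cb is also the preferred center of the new utterance.
        return "SHIFTING-1" if cb == cp else "SHIFTING"

    def better(t1, t2):
        """True if transition t1 is preferred over t2 (extended rule 2)."""
        return PREFERENCE.index(t1) < PREFERENCE.index(t2)

    # Un+4 of figure 4, "She often beats her": Cb moves from Brennan to
    # Friedman, who is also Cp, so the shift is the coherent SHIFTING-1.
    print(classify("BRENNAN", "FRIEDMAN", "FRIEDMAN"))   # SHIFTING-1
    print(better("SHIFTING-1", "SHIFTING"))              # True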
CONTINUING...
Un+1: Brennan drives an Alfa Romeo.
Cb: [BRENNAN:Brennan]
Cf: ([BRENNAN:Brennan] [X2:Alfa Romeo])

CONTINUING...
Un+2: She drives too fast.
Cb: [BRENNAN:Brennan]
Cf: ([BRENNAN:A7])
She = Brennan

RETAINING...
Un+3: Friedman races her on weekends.
Cb: [BRENNAN:A7]
Cf: ([FRIEDMAN:Friedman] [BRENNAN:A8] [WEEKEND:X3])
her = Brennan

SHIFTING-1...
Un+4: She often beats her.
Cb: [FRIEDMAN:Friedman]
Cf: ([FRIEDMAN:A9] [BRENNAN:A10])
She = Friedman, her = Brennan

Figure 4

CONTINUING...
Un+1: Who is Max waiting for?
Cb: [PLANCK:Max]
Cf: ([PLANCK:Max])

CONTINUING...
Un+2: He is waiting for Fred.
Cb: [PLANCK:Max]
Cf: ([PLANCK:A1] [FLINTSTONE:Fred])
He = Max

CONTINUING...
Un+3: He invited him to dinner.
Cb: [PLANCK:A1]
Cf: ([PLANCK:A2] [FLINTSTONE:A3])
He = Max, him = Fred

Figure 5

The third utterance in this example has two interpretations which are both consistent with the centering rules and constraints. Because of rule 2, the interpretation in figure 5 is preferred over the one in figure 6.

CONTINUING...
Un+1: Who is Max waiting for?
Cb: [PLANCK:Max]
Cf: ([PLANCK:Max])

CONTINUING...
Un+2: He is waiting for Fred.
Cb: [PLANCK:Max]
Cf: ([PLANCK:A1] [FLINTSTONE:Fred])
he = Max

RETAINING...
Un+3: He invited him to dinner.
Cb: [PLANCK:A1]
Cf: ([FLINTSTONE:A3] [PLANCK:A2])
He = Fred, him = Max

Figure 6

3 Algorithm for centering and pronoun binding

There are three basic phases to this algorithm. First the proposed anchors are constructed, then they are filtered, and finally, they are classified and ranked. The proposed anchors represent all the co-specification relationships available for this utterance. Each step is discussed and illustrated in figure 7. It would be possible to classify and rank the proposed anchors before filtering them without any other changes to the algorithm. In fact, using this strategy one could see if the highest ranked proposal passed all the filters, or if the next highest did, etc. The three filters in the filtering phase may be done in parallel. The example we use to illustrate the algorithm is in figure 2.

1. CONSTRUCT THE PROPOSED ANCHORS for Un
(a) Create set of referring expressions (RE's).
(b) Order RE's by grammatical relation.
(c) Create set of possible forward center (Cf) lists. Expand each element of (b) according to whether it is a pronoun or a proper name. Expand pronouns into a set with an entry for each discourse entity which matches its agreement features and expand proper nouns into a set with an entry for each possible referent. These expansions are a way of encoding a disjunction of possibilities.
(d) Create list of possible backward centers (Cb's). This is taken as the entities from Cf(Un-1) plus an additional entry of NIL to allow the possibility that we will not find a Cb for the current utterance.
(e) Create the proposed anchors. (Cb-Cf combinations from the cross-product of the previous two steps)

2. FILTER THE PROPOSED ANCHORS
For each anchor in our list of proposed anchors we apply the following three filters. If it passes each filter then it is still a possible anchor for the current utterance.
(a) Filter by contraindices. That is, if we have proposed the same antecedent for two contraindexed pronouns or if we have proposed an antecedent for a pronoun which it is contraindexed with, eliminate this anchor from consideration.
(b) Go through Cf(Un-1) keeping (in order) those which appear in the proposed Cf list of the anchor. If the proposed Cb of the anchor does not equal the first element of this constructed list then eliminate this anchor.
This guarantees that the Cb will be the highest ranked element of the Cf(Un-1) realized in the current utterance. (This corresponds to constraint 3 given in section 1.)
(c) If none of the entities realized as pronouns in the proposed Cf list equals the proposed Cb then eliminate this anchor. This guarantees that if any element is realized as a pronoun then the Cb is realized as a pronoun. (If there are no pronouns in the proposed Cf list then the anchor passes this filter. This corresponds to rule 1 in section 1.) This rule could be implemented as a preference strategy rather than a strict filter.

3. CLASSIFY and RANK
(a) Classify each anchor on the list of proposed anchors by the transitions as described in section 1, taking Un-1 to be the previous utterance and Un to be the one we are currently working on.
(b) Rank each proposed anchor using the extended ranking in section 2. Set Cb(Un) to the proposed Cb and Cf(Un) to the proposed Cf of the most highly ranked anchor.

EXAMPLE: She doesn't believe him. (Un+4 from figure 2)
1(a) ⇒ ([A4] [A5])
1(b) ⇒ ([A4] [A5])
1(c) ⇒ ([FRIEDMAN:A4] [POLLARD:A5])
1(d) ⇒ ([POLLARD:A2] [FRIEDMAN:A3] [RAISE:X1] NIL)
1(e) ⇒ There are four possible <Cb, Cf> pairs for this utterance:
   i. <[POLLARD:A2], ([FRIEDMAN:A4] [POLLARD:A5])>
   ii. <[FRIEDMAN:A3], ([FRIEDMAN:A4] [POLLARD:A5])>
   iii. <[RAISE:X1], ([FRIEDMAN:A4] [POLLARD:A5])>
   iv. <NIL, ([FRIEDMAN:A4] [POLLARD:A5])>
2(a) ⇒ This filter doesn't eliminate any of the proposed anchors in this example. Even though [A4] and [A5] are contraindexed, we have not proposed the same co-specifier, due to agreement.
2(b) ⇒ This filter eliminates proposed anchors ii, iii, iv.
2(c) ⇒ This filter doesn't eliminate any of the proposed anchors. The proposed Cb was realized as a pronoun.
3(a) ⇒ Anchor i is classified as a retention, based on the transition state definition.
3(b) ⇒ Anchor i is the most highly ranked anchor (trivially).

Figure 7: Algorithm and Example
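The three phases can be compressed into a few lines for the special case traced in figure 7, where agreement has already restricted each pronoun's candidates. The sketch below is ours (Python), not the HPSG system, and it simplifies contraindexing to "no two slots may co-specify":

    from itertools import product

    PREFERENCE = ["CONTINUING", "RETAINING", "SHIFTING-1", "SHIFTING"]

    def classify(cb_prev, cb, cp):
        if cb == cb_prev:
            return "CONTINUING" if cb == cp else "RETAINING"
        return "SHIFTING-1" if cb == cp else "SHIFTING"

    def resolve(prev_cb, prev_cf, slot_candidates):
        # 1. Construct: every Cf ordering crossed with every possible Cb
        # (the entities of Cf(Un-1), plus NIL, here None).
        cfs = list(product(*slot_candidates))
        anchors = [(cb, cf) for cb, cf in product(prev_cf + [None], cfs)]
        survivors = []
        for cb, cf in anchors:
            # 2a. Contraindexed pronouns may not co-specify.
            if len(set(cf)) < len(cf):
                continue
            # 2b. Cb must be the highest-ranked element of Cf(Un-1)
            # realized in Un (constraint 3).
            realized = [e for e in prev_cf if e in cf]
            if (realized and cb != realized[0]) or (not realized and cb):
                continue
            # 2c. All slots here are pronouns, so by rule 1 the Cb must
            # itself be realized as one of them.
            if cb not in cf:
                continue
            survivors.append((cb, cf))
        # 3. Classify each survivor and return the most preferred anchor.
        return min(survivors,
                   key=lambda a: PREFERENCE.index(classify(prev_cb, a[0], a[1][0])))

    # Un+4 of figure 2, "She doesn't believe him": she -> {LYN},
    # him -> {CARL} after agreement; Cf(Un+3) = (CARL, LYN, RAISE).
    print(resolve("CARL", ["CARL", "LYN", "RAISE"], [["LYN"], ["CARL"]]))
    # -> ('CARL', ('LYN', 'CARL')), the retention of anchor i in figure 7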
4 Discussion

4.1 Discussion of the algorithm

The goal of the current algorithm design was conceptual clarity rather than efficiency. The hope is that the structure provided will allow easy addition of further constraints and preferences. It would be simple to change the control structure of the algorithm so that it first proposed all the continuing or retaining anchors and then the shifting ones, thus avoiding a precomputation of all possible anchors.

[GJW86] states that a realization may contribute more than one entity to the Cf(U). This is true in cases when a partially specified semantic description is consistent with more than one interpretation. There is no need to enumerate explicitly all the possible interpretations when constructing possible Cf(U)'s⁶, as long as the associated semantic theory allows partially specified interpretations. This also holds for entities not directly realized in an utterance. On our view, after referring to "a house" in Un, a reference to "the door" in Un+1 might be gotten via inference from the representation for "a house" in Cf(Un). Thus when the proposed anchors are constructed there is no possibility of having an infinite number of potential Cf's for an utterance of finite length.

Another question is whether the preference ordering of transitions in constraint 3 should always be the same. For some examples, particularly where Un contains a single pronoun and Un-1 is a retention, some informants seem to have a preference for shifting, whereas the centering algorithm chooses a continuation (see figure 8). Many of our informants have no strong preference as to the co-specification of the unstressed "She" in Un+4. Speakers can avoid ambiguity by stressing a pronoun with respect to its phonological environment. A computational system for understanding may need to explicitly acknowledge this ambiguity. A computational system for generation would try to plan a retention as a signal of an impending shift, so that after a retention, a shift would be preferred rather than a continuation.

CONTINUING...
Un+1: Brennan drives an Alfa Romeo.
Cb: [BRENNAN:Brennan]
Cf: ([BRENNAN:Brennan] [ALFA:X1])

CONTINUING...
Un+2: She drives too fast.
Cb: [BRENNAN:Brennan]
Cf: ([BRENNAN:A7])
She = Brennan

RETAINING...
Un+3: Friedman races her on weekends.
Cb: [BRENNAN:A7]
Cf: ([FRIEDMAN:Friedman] [BRENNAN:A8] [WEEKEND:X3])
her = Brennan

CONTINUING...
Un+4: She goes to Laguna Seca.
Cb: [BRENNAN:A8]
Cf: ([BRENNAN:A9] [LAG-SEC:Laguna Seca])
She = Brennan??

Figure 8

6. Barbara Grosz, personal communication, and [GJW86].

4.2 Future Research

Of course the local approach described here does not provide all the necessary information for interpreting pronouns; constraints are also imposed by world knowledge, pragmatics, semantics and phonology. There are other interesting questions concerning the centering algorithm. How should the centering algorithm interact with an inferencing mechanism? Should it make choices when there is more than one proposed anchor with the same ranking? In a database query system, how should answers be incorporated into the discourse model? How does centering interact with a treatment of definite/indefinite NP's and quantifiers? We are exploring ideas for these and other extensions to the centering approach for modeling reference in local discourse.

5 Acknowledgements

We would like to thank the following people for their help and insight: Hewlett-Packard Labs' Natural Language group, CSLI's DIA group, Candy Sidner, Dan Flickinger, Mark Gawron, John Nerbonne, Tom Wasow, Barry Arons, Martha Pollack, Aravind Joshi, two anonymous referees, and especially Barbara Grosz.

6 Appendix

This illustrates the extension in the same detail as the example we used in the algorithm. The numbering here corresponds to the numbered steps in the algorithm of figure 7. The example is the last utterance from figure 4.

EXAMPLE: She often beats her.

1. CONSTRUCT THE PROPOSED ANCHORS
(a) ([A9] [A10])
(b) ([A9] [A10])
(c) (([FRIEDMAN:A9] [FRIEDMAN:A10]) ([FRIEDMAN:A9] [BRENNAN:A10]) ([BRENNAN:A9] [BRENNAN:A10]) ([BRENNAN:A9] [FRIEDMAN:A10]))
(d) ([FRIEDMAN:Friedman] [BRENNAN:A8] [WEEKEND:X3] NIL)
(e) There are 16 possible <Cb, Cf> pairs for this utterance:
   i. <[FRIEDMAN:Friedman], ([FRIEDMAN:A9] [FRIEDMAN:A10])>
   ii. <[FRIEDMAN:Friedman], ([FRIEDMAN:A9] [BRENNAN:A10])>
   iii. <[FRIEDMAN:Friedman], ([BRENNAN:A9] [FRIEDMAN:A10])>
   iv. <[FRIEDMAN:Friedman], ([BRENNAN:A9] [BRENNAN:A10])>
   v. <[BRENNAN:A8], ([FRIEDMAN:A9] [FRIEDMAN:A10])>
   vi. <[BRENNAN:A8], ([FRIEDMAN:A9] [BRENNAN:A10])>
   vii. <[BRENNAN:A8], ([BRENNAN:A9] [FRIEDMAN:A10])>
   viii. <[BRENNAN:A8], ([BRENNAN:A9] [BRENNAN:A10])>
   ix. <[WEEKEND:X3], ([FRIEDMAN:A9] [FRIEDMAN:A10])>
   x. <[WEEKEND:X3], ([FRIEDMAN:A9] [BRENNAN:A10])>
   xi. <[WEEKEND:X3], ([BRENNAN:A9] [FRIEDMAN:A10])>
   xii. <[WEEKEND:X3], ([BRENNAN:A9] [BRENNAN:A10])>
   xiii. <NIL, ([FRIEDMAN:A9] [FRIEDMAN:A10])>
   xiv. <NIL, ([FRIEDMAN:A9] [BRENNAN:A10])>
   xv. <NIL, ([BRENNAN:A9] [FRIEDMAN:A10])>
   xvi. <NIL, ([BRENNAN:A9] [BRENNAN:A10])>

2. FILTER THE PROPOSED ANCHORS
(a) Filter by contraindices. Anchors i, iv, v, viii, ix, xii, xiii, xvi are eliminated since [A9] and [A10] are contraindexed.
(b) The constraint 3 filter eliminates proposed anchors vii and ix through xvi.
(c) The rule 1 filter eliminates proposed anchors ix through xvi.

3. CLASSIFY and RANK
(a) After filtering there are only two anchors left:
   ii. <[FRIEDMAN:Friedman], ([FRIEDMAN:A9] [BRENNAN:A10])>
   iii. <[FRIEDMAN:Friedman], ([BRENNAN:A9] [FRIEDMAN:A10])>
Anchor ii is classified as shifting-1, whereas anchor iii is classified as shifting.
(b) Anchor ii is more highly ranked.

References

[BP80] E. Bach and B.H. Partee. Anaphora and semantic structure. In J. Kreiman and A. Ojeda, editors, Papers from the Parasession on Pronouns and Anaphora, pages 1-28, CLS, Chicago, IL, 1980.
[Cho80] N. Chomsky. On binding. Linguistic Inquiry, 11:pp. 1-46, 1980.
[GJW83] B.J. Grosz, A.K. Joshi, and S. Weinstein. Providing a unified account of definite noun phrases in discourse. In Proc., 21st Annual Meeting of the ACL, Association of Computational Linguistics, pages 44-50, Cambridge, MA, 1983.
[GJW86] B.J. Grosz, A.K. Joshi, and S. Weinstein. Towards a computational theory of discourse interpretation. Preliminary draft, 1986.
[GS85] B.J. Grosz and C.L. Sidner. The Structure of Discourse Structure. Technical Report CSLI-85-39, Center for the Study of Language and Information, Stanford, CA, 1985.
[Gui85] R. Guindon. Anaphora resolution: short term memory and focusing. In Proc., 23rd Annual Meeting of the ACL, Association of Computational Linguistics, pages 218-227, Chicago, IL, 1985.
[Kam86] M. Kameyama. A property-sharing constraint in centering. In Proc., 24th Annual Meeting of the ACL, Association of Computational Linguistics, pages 200-206, New York, NY, 1986.
[Sel85] P. Sells. Coreference and bound anaphora: a restatement of the facts. In Choe, Berman and McDonough, editors, Proceedings of NELS 16, GLSA, University of Massachusetts, 1985.
[SH84] I. Sag and J. Hankamer. Towards a theory of anaphoric processing. Linguistics and Philosophy, 7:pp. 325-345, 1984.
[Sid81] C.L. Sidner. Focusing for interpretation of pronouns. American Journal of Computational Linguistics, 7(4):pp. 217-231, 1981.
[Sid83] C.L. Sidner. Focusing in the comprehension of definite anaphora. In M. Brady and R.C. Berwick, editors, Computational Models of Discourse, MIT Press, 1983.
NOW LET'S TALK ABOUT NOW: IDENTIFYING CUE PHRASES INTONATIONALLY

Julia Hirschberg, AT&T Bell Laboratories, Murray Hill, New Jersey 07974
Diane Litman, AT&T Bell Laboratories, Murray Hill, New Jersey 07974

ABSTRACT

Cue phrases are words and phrases such as now and by the way which may be used to convey explicit information about the structure of a discourse. However, while cue phrases may convey discourse structure, each may also be used to different effect. The question of how speakers and hearers distinguish between such uses of cue phrases has not been addressed in discourse studies to date. Based on a study of now in natural recorded discourse, we propose that cue and non-cue usage can be distinguished intonationally, on the basis of phrasing and accent.

1. Introduction

Cue phrases are linguistic expressions -- such as okay, but, now, anyway, by the way, in any case, that reminds me -- which may, instead of making a 'semantic' contribution to an utterance (i.e., affecting its truth conditions), be used to convey explicit information about the structure of a discourse [4], [16], [5].¹ For example, anyway can indicate a topic return and that reminds me can signal a digression. The recognition and generation of cue phrases is of considerable interest to research in natural language processing. The structural information conveyed by these phrases is crucial to tasks such as anaphora resolution [6], [5], [16] and the identification of rhetorical relations among portions of a text or discourse [11], [8], [16]. It has also been claimed that the incorporation of cue phrases into natural language processing systems helps reduce the complexity of discourse processing [21], [4], [10].

Despite the recognized importance of cue phrases, many questions about how they are defined -- both individually and as a class -- and how they are to be represented, generated, and recognized remain to be examined. For example, in the general case, each lexical item that can serve as a 'cue phrase' also has an alternate interpretation.² While the 'cue' interpretation provides explicit information about the structure of a discourse, the 'non-cue' interpretation provides quite different information, such as conjunction (but) or adverbial modification (anyway). Distinguishing between these two uses is critical to the interpretation of discourse. In this paper, we address the problem of how this distinction might be made: we propose that, in speech, this distinction is made intonationally. We support our hypothesis by an analysis of cue and non-cue uses of the item now in recorded naturally occurring discourse.

In Section 2 we discuss the general problem of distinguishing between cue and non-cue usage and consider possible alternatives to our hypothesis. In Section 3 we present relevant aspects of the theory of English intonation assumed here for our analysis [13], [9]. Section 4 describes our data, presents the results of our analysis, and along with Section 5, discusses the implications of our results for the identification of cue phrases in general -- both in speech and in written text.

1. Previous literature has employed the terms 'clue word', 'discourse marker' or 'discourse particle' for these items [16], [4], [14], [18]. More recently Grosz and Sidner [5] have proposed the term cue phrase for these items, which we will adopt in this paper.
2. If 'non-lexical' items such as uh are classed as cue phrases, then this generalization may not hold for all cue phrases. However, even uh appears to have both 'cue' and 'non-cue' uses; i.e., it may signal a digression or interruption, or it may simply serve as a pause filler.
3. These and other examples are taken from a radio call-in program, Harry Gross's "Speaking of Your Money" [15]. The corpus will be described in more detail in Section 4.
2. The Problem

Previous definitions of cue phrases as a class have been extensional and definitions of particular cue phrases procedural. For example, now signals a 'push' or 'pop' [5] of the attentional stack or 'further development' of a previous context [16]. Despite some recognition [5] that cue phrases are not always employed as cue phrases, no attempt has been made to discover how 'cue' uses of cue phrases are distinguished from 'non-cue' uses. When does now, for example, function as a discourse marker and when is it deictic?

Roughly, the non-cue or deictic use of now makes reference to a span of time which minimally includes the utterance time. This time span may include little more than the moment of utterance, as in 1, or it may be of indeterminate length, as in 2.³

1. Fred: Yeah I think we'll look that up and possibly uh after one of your breaks Harry.
   Harry: OK we'll take one now. Just hang on Bill and we'll be right back with you.

2. Harry: You know I see more coupons now than I've ever seen before and I'll bet you have too.

In contrast, the cue use of now signals a return to a previous topic, as in the two examples of now in 3, or introduces a subtopic, as in 4.

3. Harry: Fred whatta you have to say about this IRA problem?
   Fred: Ok. You see now unfortunately Harry as we alluded to earlier when there is a distribution from an IRA that is taxable ... {discussion of caller's beneficiary status} ... Now the the five thousand that you're alluding to uh of the --

4. Doris: I have a couple quick questions about the income tax. The first one is my husband is retired and on social security and in '81 he did a few odd jobs for a friend uh around the property and uh he was reimbursed for that to the tune of about $640. Now where would he where would we put that on the form?

While the distinction between cue and non-cue now seems fairly clear in the above examples, other cases are more difficult. Consider 5:

5. Ethel: All right I have just retired from a position that I've been in for forty some odd years. I have -- I earned in 1981 about thirty thousand dollars. Now I have a profit sharing coming to me. My problem is shall I take the ten year averaging...

From the transcription alone, either a cue or a non-cue interpretation is plausible. The caller might have a profit sharing due her at the moment of utterance (non-cue). Or, she might be using now to mark profit sharing as a subtopic (cue) -- leaving the time of the profit sharing unspecified. How then do hearers distinguish cue from non-cue uses?

One might propose that hearers use tense to delimit cases in which deictic now is possible. That is, it would seem reasonable to propose that deictic now occurs only when the verb modified by now (or the main verb of the clause so modified) is temporally compatible -- i.e., non-past. For example, using the past tense in 1 -- we took one now -- seems distinctly odd. However, we took one just now is clearly felicitous. So, both cue and non-cue now are possible when the main verb is in the past tense. As examples 1-3 above illustrate, both are also possible when the main verb is in the present tense. So, tense is clearly inadequate to distinguish between cue and non-cue uses of now.

Another possible diagnostic for non-cue now might be some notion of the general felicity of temporal reference in an utterance -- which might correspond to the felicity of substituting other temporal adverbials for now. For example, we'll take one in an hour would be felicitous in 1, as would I see more coupons these days in 2. Substituting other temporals for now in either example 3 (Today the the five thousand that you're alluding to...) or example 4 (Monday where would he where would we put that on the form?) would be infelicitous. However, this is only a necessary -- but not a sufficient -- test for deictic now. While a temporal adverbial may be substituted for now in 5 (e.g., Today I have a profit sharing coming to me), both cue and non-cue interpretations appear equally plausible from the transcription, as noted above. In fact, listeners have no hesitation in labeling this a cue now.

A third possibility is that hearers use surface order position to distinguish cue from non-cue uses. In fact, most systems that generate cue phrases assume a canonical (usually first) position within the clause [16], [21]. However, without intonational information, surface position may itself be unclear. Consider Example 6:

6. Evelyn: I see. So in other words I will have to pay the full amount of the uh of the tax now what about Pennsylvania state tax? Can you give me any information on that?

Although a cue reading is possible, most readers would assign now a non-cue interpretation if it is associated with the preceding clause, I will have to pay the full amount of the...tax now -- but a cue interpretation if it is associated with the succeeding clause, Now what about Pennsylvania state tax?. The actual recording of 6 clearly supports the latter interpretation: the strong intonational boundary between tax and now identifies the clausal boundary -- and, thus, indirectly, the surface position of now within its clause. Similarly, 7 would be ambiguous between a cue reading, Well now, you've got another point, and a deictic reading, Well, now you've got another point -- without intonational cues:
So, tense is clearly inade- quate to distinguish between cue and non-cue uses of now. Another possible diagnostic for non-cue now might be some notion of the general felicity of temporal reference in an utterance -- which might correspond to the felicity of substituting other temporal adverbials for now. For exam- ple, we'll take one in an hour would be felicitous in 1, as would I see more coupons these days in 2. Substituting other temporals for now in either example 3 (Today the the five thousand that you're alluding to...) or example 4 (Mon- day where would he where would we put that on the form?) would be infelicitous. However, this is only a necessary -- but hot a sufficient -- test for deictic now. While a tem- poral adverbial may be substituted for now in 5 (e.g., Today I have a profit sharing coming to me), both cue and non-cue interpretations appear equaliy plausible from the transcription, as noted above. In fact, listeners have no hesitation in labeling this a cue now. A third possibility is that hearers use surface order posi- tion to distinguish cue from non-cue uses. In fact, most systems that generate cue phrases assume a canonical (usu- ally first) position within the clause [16], [21]. However, without intonational information, surface position may itself be unclear. Consider Example 6: , Evelyn: I see. So in other words I will have to pay the full amount of the uh of the tax now what about Pennsylvania state tax? Can you give me any information on that? Although a cue reading is possible, most readers would assign now a non-cue interpretation if it is associated with the preceding clause, I will have to pay the full amount of the...tax now -- but a cue interpretation if it is associated with the succeeding clause, Now what about Pennsylvania state tax?. The actual recording of 6 clearly supports the latter interpretation: the strong intonational boundary between tax and now identifies the clausal boundary -- and, thus, indirectly, the surface position of now within its clause. Similarly, 7 would be ambiguous between a cue reading, Well now, you've got another point, and a deictic reading, Well, now you've got another point -- without into- national cues: 164 7, Fred: You stand up for your rights. Whatever you give to charity you claim. Linda:(laughs) I don't want the hassle of an of an Fred: Well now you've got another point and I think at at times the service counts on the fact that people don't want the hassle -- and maybe we as Americans have to stand up a little bit more and claim what's due us. Here it is clear from the recording that Fred intended the deictic use. Later, we will present evidence from our corpus that cue now can appear clause-finally, and non-cue now, clause.initially. So, surface position also appears inadequate to distinguish cue from non-cue now. Finally, hearers might use syntactic information to discriminate between cue and non-cue usage. At least for now, this seems unlikely. Both cue and non-cue now's are commonly classed as adverbials. So syntactic category does not differentiate. Furthermore, both can be attached at the sentence level. While non-cue now may also modify VP, it is difficult to imagine attaching cue now at that level -- since, by definition, it can make no 'semantic' con- tribution to either S or riP. However, this potential attachment distinction does not provide a means of distin- guishing cue from non-cue now -- rather, attachment possi- bilities must be based on the prior cue/ non-cue distinc- tion. 
So, syntactic structure provides no useful clues to the identification of cue versus non-cue usage in this case.

In summary, neither tense, nor the 'appropriateness' of temporal modification (or lack thereof), nor surface position, nor syntactic structure provides adequate information for distinguishing between cue and non-cue now. As we will show in the remainder of this paper, however, intonational features do provide such information.

3. Phrasing and Accent in English

The importance of intonational information to the communication of discourse structure has been recognized in a variety of studies [7], [20], [2], [17], [1]. However, just which intonational features are important and how they communicate discourse information is not well understood. Under-utilization of objective measures of intonational features in empirical research and the lack of a sufficiently explicit system for intonational description have made it difficult to compare and evaluate specific claims. For our study we have examined fundamental frequency (F0) contours produced using an autocorrelation pitch tracker developed by Mark Liberman. As a system of intonational description, we have adopted Pierrehumbert's [13] theory of English intonation.

In Pierrehumbert's system, intonational contours are described as sequences of low (L) and high (H) tones in the F0 (fundamental frequency) contour. A well-formed intermediate phrase consists of one or more pitch accents, which are aligned with stressed syllables (with alignment indicated by *) on the basis of the metrical pattern of the text and signify intonational prominence, and a simple high (H) or low (L) tone that represents the phrase accent. The phrase accent controls the pitch between the last pitch accent of the current intermediate phrase and the beginning of the next -- or the end of the utterance. Intonational phrases are larger phonological units, composed of one or more intermediate phrases. At the end of an intonational phrase, a boundary tone, which may also be H or L and is indicated by '%', falls exactly at the phrase boundary. So, each intonational phrase ends with a phrase accent and a boundary tone.

A phrase's tune, or melody, has as its domain the intonational phrase. It is defined by the sequence of pitch accent(s), phrase accent(s), and boundary tone of that phrase. For example, an ordinary declarative pattern with a final fall is represented as H* L L% -- that is, a tune with H* pitch accent(s), a L phrase accent, and a L% boundary tone. Consider the pitch track in Figure 1 representing a simple intonational phrase composed of one intermediate phrase and with a typical declarative contour. (For ease of comparison of intonational features here, we present pitch contours of synthetic speech, produced with the Bell Labs Text-to-Speech System [12]. The analysis we will present in Section 4 is based upon recorded natural speech.)

[Figure 1. A Simple Declarative Contour]

All the pitch accents in this phrase, including the nuclear accent -- the primary stressed syllable -- are high (H*). The phrase accent is L and the boundary tone is also low (L%). A given sentence may be uttered with considerable variation in phrasing.
For example, in Figure 1 Now let's talk about 'now' was produced as a single intonational phrase, whereas in Figure 2 Now is set off as a separate phrase.

[Figure 2. Two Phrases]

The occurrence of phrase accents and boundary tones, together with other phrase-final characteristics such as pauses and syllable lengthening, enable us to identify intermediate and intonational phrases in natural as well as in synthetic speech. Pitch accents, peaks or valleys in the F0 contour which fall on the stressed syllables of lexical items, make those items intonationally prominent. In Figure 3, the first instance of now has no pitch accent, while the second receives nuclear stress. (In our notation, the absence of a specified accent indicates that a word is not accented.)

[Figure 3. Deaccenting 'Now']

Contrast Figure 3 with Figure 1. In Figure 3, the first F0 peak occurs on let's; in Figure 1, the first peak occurred on now. A pitch accent consists either of a single tone or an ordered pair of tones, such as L*+H. The tone aligned with the stressed syllable is indicated by a star (*); thus, in an L*+H accent, the low tone (L*) is aligned with the stressed syllable. There are six pitch accents in English: two simple tones -- H and L -- and four complex ones -- L*+H, L+H*, H*+L, and H+L*. The most common accent, H*, comes out as a peak on the accented syllable (as on Now in Figure 1). L* accents occur much lower in the pitch range than H* and are phonetically realized as local F0 minima. The accent on Now in Figure 4 is a L*.

[Figure 4. Low Accent on 'Now']

The other English accents have two tones. Figure 5 shows a version of the sentence in Figures 1-4 with a L+H* accent on the first instance of now.

[Figure 5. An L+H* Accent]

Note that there is a peak on now (H*) -- as there was in Figure 1 -- but now a striking valley (L) occurs just before this peak. While other intonational features, such as overall tune or pitch range,⁴ may also provide information about cue phrase interpretation, so far we have found the most significant results by comparing accent and phrasing for cue and non-cue now.

4. The pitch range of an intonational phrase is defined by its topline -- roughly, the highest peak in the F0 contour of the phrase -- and the speaker's baseline -- the lowest point the speaker realizes in normal speech, measured across all utterances. Since the baseline is rarely realized in an utterance, pitch ranges may be compared for a given speaker by comparing toplines.
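To summarize the tone inventory just described, a small sketch (ours, in Python; simplified to a single intermediate phrase per intonational phrase) of tune well-formedness:

    # Check well-formedness of a Pierrehumbert-style tune, given as a
    # sequence of tone strings.  Representation is ours, for illustration.

    PITCH_ACCENTS = {"H*", "L*", "L*+H", "L+H*", "H*+L", "H+L*"}

    def well_formed_intermediate(tones):
        """One or more pitch accents followed by a H or L phrase accent."""
        *accents, phrase_accent = tones
        return (len(accents) >= 1
                and all(t in PITCH_ACCENTS for t in accents)
                and phrase_accent in {"H", "L"})

    def well_formed_intonational(tones):
        """Intermediate-phrase material plus a final boundary tone."""
        *body, boundary = tones
        return well_formed_intermediate(body) and boundary in {"H%", "L%"}

    # The ordinary declarative tune of Figure 1:
    print(well_formed_intonational(["H*", "L", "L%"]))   # True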
4. Intonational Characteristics of Cue and Non-Cue Now

To investigate our hypothesis that cue and non-cue uses of linguistic expressions can be distinguished intonationally, we conducted a study of the cue phrase now in recorded natural speech. Our corpus consisted of recordings of four days of "The Harry Gross Show: Speaking of Your Money", recorded during the week of 1 February 1982 [15]. In this Philadelphia radio call-in program, Gross offers financial advice to callers; for the 3 February show, he was joined by an accountant friend, Fred Levy. The four shows provided approximately ten hours of conversation between expert(s) and callers.

We chose now to begin our study of cue phrases for several reasons. First, our corpus contained numerous instances of both cue and non-cue now (approximately 350 in all). In contrast, phrases such as anyway, anyhow, therefore, moreover, and furthermore appear fewer than ten times each. A second reason for our choice of now is that now often appears in conjunction with other cue phrases (as with well in 7, or I see now, now another thing, ok now, right now). This allows us to study how adjacent cue phrases interact with one another. Third, now has a number of desirable phonetic characteristics. As it is monosyllabic, possible variation in stress patterns does not arise to complicate the analysis. Because it is completely voiced and introduces no segmental effects into the F0 contour, it is also easier to analyze pitch tracks reliably.

4.1 Sample One

Our first sample consisted of 48 occurrences of now -- all the instances from two sides of tapes of the show chosen at random.⁵ The 48 tokens were produced by fifteen different speakers; 22.9% were produced by Harry Gross and 77.1% by other speakers. We analyzed this data in the following way: first, three people (including the authors) determined by ear whether individual tokens were cue or non-cue. We then digitized and pitch-tracked the intonational phrase containing each token, plus (where same speaker) the preceding and succeeding intonational phrases. For this study we compared cue and non-cue uses along several dimensions:

5. Two instances were excluded from this sample since the phrasing was unavailable due to hesitation or interruption.
The first striking difference between the two appeared in phrasing, as illustrated in Table I: Of all the non-cue uses of now, none appeared as the only item in an intonational or inter- mediate phrase, while fully 42.0% of cue now represented entire intonational or intermediate phrases. (Of these 13 cue now's, 8 were t~c only lexical item in a full intona- tional phrase.) A X test of association between cue/non- cu~ status and phrasing shows significance at the .005 level (X~(I)--9.8). 6 So, this sample suggests that now's which INPHRASE WHOLEPHRASE NON-CUE 17 0 CUE 18 13 Table 1. Phrasing for Cue and Non-Cue Now are set apart as separate intermediate or intonational phrases are very likely to be cue news. Another clear distinction between cue and non-cue now's in this sample emerged when we examined the position of now within its intermediate phrase. As Table 2 illustrates, all 31 cue now's were 'first' (30 were absolutely first and FIRST LAST OTHER NON-CUE 3 I0 4 CUE 31 0 0 Table 2. Position within Intermediate Phrase 6. The ×2 test measures the degree of association between two vari- ables by calculating the probability (.p) that the disparity between expected and actual values in each cell is due to chance. The value of X 2 itself for (n) degrees of freedom (d.f.) is an overall measure of this disparity. The data show in Table 1 have ×2 = 9.8 for 1 d.f., p < .005. That is, there is less than a .5% probability that this apparent association is due to chance. Roughly. p < .01 or better isgenerally accepted as indicating 'statistical significance'; p > .01 becomes more controversial; p > .05 is generally considered not statistically significant; and p > .2 is good indication of a lack of discernible association between two variables. So, the data in Table 1, which are significant at the .001 level, appear very reli- ably associated. 167 one followed another cue phrase) in their phrase. Not only were these first in intermediate phrase -- they were also first in their (larger) intonational phrase. Only three non-cue now's occupied a similar position (again, with one following a cue phrase). However, I0 non-cue now's (58.8%) were last in their intermediate phrase -- and half of these were last in their intonational phrase. Again, the data show a very strong association (×"(2)=36.0, p < .001). So, once intonational phrasing is determined, cue and non-cue now are generally distinguishable by position within the phrase, with cue now's tending to come first in intonational phrase and non-cue now's last (at least in intermediate phrase and often in intonational phrase as well). Finally, cue and non-cue occurrences in this sample were distinguishable in terms of presence or absence of pitch accent -- and by type of pitch accent, where accented. Because of the large number of possible accent types, and since there are competing reasons to accent or deaccent items, ./ we might expect these findings to be less clear than those for phrasing. In fact, although their interpreta- tion is more complicated, the results are equally striking. The overzll results of the 46 occurrences from this sample for which accent type could be precisely determined 8 are presented in Table 3: DEACCENTED H*orCOMPLEX L* NON-CUE 2 15 0 CUE 13 10 6 Table 3. 
Finally, cue and non-cue occurrences in this sample were distinguishable in terms of presence or absence of pitch accent -- and by type of pitch accent, where accented. Because of the large number of possible accent types, and since there are competing reasons to accent or deaccent items,⁷ we might expect these findings to be less clear than those for phrasing. In fact, although their interpretation is more complicated, the results are equally striking. The overall results for the 46 occurrences from this sample for which accent type could be precisely determined⁸ are presented in Table 3:

              DEACCENTED   H* or COMPLEX   L*
   NON-CUE         2             15         0
   CUE            13             10         6

   Table 3. Accenting of Cue and Non-Cue Now

7. Such as accenting to indicate contrastive stress or deaccenting to indicate an item is already salient in the discourse.
8. 2 cue now's were either L* or H* with a compressed pitch range.

Note first that large numbers of cue and non-cue tokens were uttered with a H* or complex accent (34.5% of cue and fully 88.2% of non-cue). The chief similarity here lies in the use of the H* accent type, with 9 cue uses and 8 non-cue (and 2 other non-cue tokens are either H* or complex). Note also that cue now's were much more likely overall to be deaccented (44.8% vs. 13.3%). No non-cue now was uttered with a L* accent -- although 6 cue now's were. An even sharper distinction in accent type is found if we separate out those now's which form entire intermediate or intonational phrases from the analysis. (Recall that these tokens are all cue uses. These now's were always accented, since each such phrase must contain at least one pitch accent.) Of the 11 cue phrases representing entire phrases (and for which we can distinguish accent type precisely), 9 bore H* accents. This suggests that one similarity between cue and non-cue now -- the frequent H* accent -- might disappear if we limit our comparison to those now's forming part of larger intonational phrases. In fact, such is the case, as illustrated in Table 4:

              DEACCENTED   H* or COMPLEX   L*
   NON-CUE         2             15         0
   CUE            13              0         5

   Table 4. Accenting of Now's in Larger Intonational Phrases

Again, these results are significant at the .001 level (χ²(2)=28.1). The great majority (88.2%) of non-cue now's forming part of larger intonational phrases received a H* or complex pitch accent, while the majority (72.2%) of cue now's forming part of larger intonational phrases were deaccented. Since all other cue now's forming part of larger intonational phrases received a L* accent, only two now's forming part of larger intonational phrases are not distinguishable in terms of accent type -- the two deaccented non-cue now's. So, those cue now's not distinguishable from non-cue by being set apart as separate intonational phrases were generally so distinguishable in terms of accenting. Since neither of the deaccented non-cue now's appeared at the beginning of an intonational phrase -- as all cue now's did -- all of the instances of now in our sample were in fact distinguishable as cue or non-cue in terms of their position in phrase, phrasal composition, and accent.

We also examined whether cue and non-cue now patterned differently in terms of appearance with other cue phrases, with the following results:

              ALONE   WITH CUE
   NON-CUE      9         8
   CUE         22         9

   Table 5. Occurrence with Other Cue Phrases

Somewhat counter-intuitively, non-cue now tended to appear more frequently than cue now with other cue phrases -- although generally these other cue phrases were also used in their non-cue sense, e.g., right now. The co-occurrence is not, however, statistically significant (χ²(1)=1.6, p > .2). At any rate, the possibility that listeners identify cue now by its co-occurrence with other cue phrases receives no support from our data. Examination of the intonational contour used with phrases containing cue and non-cue now, and of the location of these phrases within speaker turn, also produced no significant results.

So, we were able to hypothesize from this sample that cue and non-cue now are characterizable in the following ways: Non-cue now forms part of larger intonational phrases and tends to be accented and to receive a H* or complex pitch accent.
All non-cue uses in the sample did form part of larger intonational phrases, and all but two -- which were deaccented -- were accented with a H* or complex accent. Cue now seems to form two classes: One class is generally set apart as a separate intermediate or intonational phrase. Something under half of our sample fell into this category. The other class, which constituted just over half of our sample, forms part of a larger intonational phrase and is either deaccented or uttered with a L* accent. Both classes share the property of appearing in initial intonational phrase position. In summary, non-cue now is always distinct from cue now in our sample in terms of a combination of accent type, position in intonational phrase, and overall composition of the intermediate or intonational phrase. Thus we hypothesize that hearers might be able to distinguish between the two uses of now in three ways: by noting whether now formed a separate intermediate (or intonational) phrase, by locating now positionally within its intonational phrase, and by identifying the presence or absence of a pitch accent on now and the type of such accent where present. To test the validity of these hypotheses, we replicated our study with a second sample from the same corpus.

4.2 Sample Two

For our second sample, we examined the first 52 instances of now taken from another four randomly chosen sides of tapes.9 This sample included tokens from fifteen speakers, with exactly half produced by the host and half by others.10 This time, six people (including the authors) determined whether instances were cue or non-cue before we analyzed the intonational features. We next examined phrasing and accent used with these tokens to test the hypotheses derived from our first sample. Again, just over one third of our sample (20) were determined to be non-cue and just under two-thirds (32) cue. The striking differences in phrasing noted between cue and non-cue now in sample one were again present in sample two: Again, around 40% (13) of cue now's formed separate intermediate (8) or intonational (5) phrases; only one of the 20 non-cue now's formed a separate intermediate phrase and none a separate intonational phrase. These results were significant at the .005 level -- again strong evidence of association between cue/non-cue status and phrasal composition. When we tested position of now within its intonational phrase in sample two, we again found that cue now generally began the intonational phrase: All but one cue now (this one ended its phrase) began its phrase; again, most (60%) non-cue now's came last in phrase, with two first. These results were significant at the .001 level. Finally, our hypotheses about accent type were also borne out by our second study: The division of all cue and non-cue now's by accent type appears even more pronounced in the second study: Of 20 non-cue now's, 85% were H* or complex and the rest deaccented; while of 31 cue now's, 58.1% were deaccented, 19.4% H* or complex, and 22.6% L*. So, while non-cue now's are almost identical to those in the first sample, cue now's are even more clearly distinguished here from non-cue.

9. We excluded 2 tokens from these tapes because of lack of available information about phrasing or accent and 5 others because our informants were unable to decide whether the now was cue or non-cue.
10. We speak to this issue below.
When instances of now forming entire intermediate or intonational phrases are removed from the second sample, the accenting of cue and non-cue now is even more distinct: All cue now's forming part of a larger phrase are deaccented, while only 15.8% of non-cue now's are; the rest of the non-cue now's receive a H* or complex accent (p < .001). So, our second sample confirmed our hypotheses that cue and non-cue now can be differentiated intonationally in terms of position within intonational phrase, composition of intermediate or intonational phrase, and choice of accent.

4.3 Speaker Independence

Although our second sample did confirm our initial hypotheses, the preponderance of tokens in both samples from one (professional) speaker might well be of concern. To test this, we compared characteristics of phrasing and accent for host and non-host data over the combined samples (n=100). The results showed no significant differences between host and caller tokens in terms of the hypotheses proposed from our first sample and confirmed by our second: First, host (n=37) and callers (n=63) produced cue and non-cue tokens in roughly similar proportions -- 40.5% non-cue for the host and 34.9% for his callers (p > .5). Similarly, there was no distinction between host and non-host data in terms of choice of accent type, or accenting vs. deaccenting (p > .1). Our hypothesis about the significance of position within intonational phrase holds for both host and non-host data with significance at the .001 level in each case. However, in tendency to set cue now apart as a separate intonational or intermediate phrase, there was an interesting distinction between host and caller: While callers tended to choose from among the two options for cue now in almost equal numbers (48.8% of their cue now's are separate phrases), the host chose this option only 27.3% of the time. While analysis of data for callers and for all speakers shows that the relationship between cue use and separate phrase is significant at the .001 level, this relationship is not significant for the host data. However, although host and caller data differ in the proportion of occurrences of the two classes of cue now which emerge from our data as a whole, the existence of the classes themselves is confirmed. Where the host did not produce cue now's set apart as separate intonational or intermediate phrases, he always produced cue now's which were deaccented or accented with a L* accent. So, while individual speakers may choose different strategies to realize cue now, they appear to choose from among the same limited number of options. In sum, the hypotheses proposed on the basis of our first sample are borne out by our analysis of the second -- and remain significant even when we eliminate the host from our sample.
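Taken together, these results amount to a small decision procedure over the three intonational features. The following is a minimal sketch of that procedure (the feature encoding and function name are our illustrative assumptions, not the authors' implementation):

# Classify a token of "now" as cue or non-cue from its intonational features.
# accent is one of: "deaccented", "H*", "complex", "L*".
def classify_now(separate_phrase, first_in_phrase, accent):
    if separate_phrase:
        return "cue"      # class 1: now forms its own intermediate/intonational phrase
    if first_in_phrase and accent in ("deaccented", "L*"):
        return "cue"      # class 2: phrase-initial, deaccented or L*
    return "non-cue"      # typically internal or final, with H* or complex accent

print(classify_now(separate_phrase=False, first_in_phrase=True, accent="L*"))         # cue
print(classify_now(separate_phrase=False, first_in_phrase=False, accent="H*"))        # non-cue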
4.4 Distinguishing Cue and Non-Cue Usage in Text

Our conclusion from this study that intonational features play a crucial role in the distinction between cue and non-cue usage in speech clearly poses problems for text. Do readers use strategies different from hearers to make this distinction, and, if so, what might they be? Are there perhaps orthographic correlates of the intonational features which we have found to be important in speech? As a first step toward resolving these questions, we examined the orthographic features of the transcripts of our corpus (which were prepared without particular consideration of intonational features) and made a preliminary examination of two sets of typescript interactions.

We examined transcriptions of all tokens of now in both our samples to determine whether phrasing was indicated orthographically.11 Of all those instances of now (n=60) that were absolutely first in their intonational phrase, 56.7% (34) were preceded by punctuation -- a comma, dash, or end punctuation. 28.3% (17) were first in speaker turn, and thus orthographically 'marked' by indication of speaker name. It should be noted that the units so distinguished were not necessarily syntactically well-formed units. So, in 85% (51) of cases, first position in intonational phrase was marked orthographically in the transcription. No now's that were not absolutely first in their intonational phrase (in particular, none that were merely first in intermediate phrase) were so marked. Of those 23 now's coming last in an intermediate or intonational phrase, however, only 60.9% (14) are immediately followed by a similar orthographic clue. Finally, of the 13 instances of now which formed separate intonational phrases, only 2 were so marked orthographically -- by being both preceded and followed by some punctuation. None of the now's forming only complete intermediate phrases were so marked. These findings suggest that only the intonational feature 'first in intonational phrase' has any clear orthographic correlate. However, since this feature does characterize 90.1% of the 63 cue now's in our spoken data (merging both samples) -- and since 85.0% of these cue now's are also orthographically marked for position as well (so that 80.1% of cue now's can be orthographically distinguished) -- it seems that this correlation between intonation and orthography may be a useful one to pursue. It is also possible that a perusal of text, rather than transcribed speech, might indicate more orthographic clues to cue/non-cue disambiguation. We are currently examining two sets of typescripts12 of task-oriented text interactions.

11. No instances of capitalization or other orthographic marking of nuclear stress appear in any of the transcripts.
12. Ethel Schuster's transcripts of students being tutored in EMACS [19] and transcripts of people assembling a water pump [3].
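As a rough illustration of the 'first in intonational phrase' correlate, the transcription test described above can be approximated with a simple pattern match. This is a sketch under stated assumptions (one string per speaker turn; the particular punctuation set is ours), not the procedure actually used on the corpus:

import re

# 'now' counts as orthographically marked for first-in-phrase position if it
# begins the speaker turn or immediately follows a comma, dash, or end punctuation.
MARKED = re.compile(r"(^|[,.!?;:]\s*|--\s*)now\b", re.IGNORECASE)

def orthographically_marked(turn):
    return bool(MARKED.search(turn))

print(orthographically_marked("Now, about that budget."))   # True (turn-initial)
print(orthographically_marked("He is retired now."))        # False (no preceding clue)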
5. Conclusions

Our study of the cue phrase now strongly suggests that speakers and hearers can distinguish between cue and non-cue uses of cue phrases intonationally, by making or noting differences in accent and phrasing. Cue and non-cue now in our samples are reliably distinguished in terms of whether now forms a separate intermediate or intonational phrase, whether it occurs first in its intonational phrase, and whether it is accented or not -- and, if accented, the type of accent it bears. In the absence of alternate known means of distinction between cue and non-cue use, we propose that speakers and hearers do differentiate intonationally. Our next step is to extend our study to other cue phrases, including anyway, well, first, and right. We also plan to examine the relationship between cue usage and pitch range manipulation [7], another indicator of discourse structure. The goal of our research is both to provide new sources of linguistic information for work in plan inference and discourse understanding, and to permit more sophisticated use of intonational variation in synthetic speech.

Acknowledgements

Thanks to Janet Pierrehumbert and Jan van Santen for help in data analysis, to Don Hindle, Mats Rooth, and Kim Silverman for providing judgements, and to David Etherington, Osamu Fujimura, Brad Goodman, Kathy McCoy, Martha Pollack, and the ACL reviewers for their helpful comments on an earlier draft of this paper.

REFERENCES

1. Brazil, D., Coulthard, M., and Johns, C. Discourse intonation and language teaching. Longman, London, 1980.
2. Butterworth, B. Hesitation and semantic planning in speech. Journal of Psycholinguistic Research 4 (1975), 75-87.
3. Cohen, P., Fertig, S., and Starr, K. Dependencies of discourse structure on the modality of communication: telephone vs. teletype. In Proceedings of the ACL, ACL, Toronto, 1982, pp. 28-35.
4. Cohen, R. A computational theory of the function of clue words in argument understanding. In Proceedings of COLING84, COLING, Stanford, 1984, pp. 251-255.
5. Grosz, B. and Sidner, C. Attention, intentions, and the structure of discourse. Computational Linguistics 12, 3 (1986), 175-204.
6. Grosz, B.J. The representation and use of focus in dialogue understanding. Technical Note 151, SRI International, 1977. University of California at Berkeley PhD thesis.
7. Hirschberg, J. and Pierrehumbert, J. The intonational structuring of discourse. In Proceedings of the 24th Annual Meeting, Association for Computational Linguistics, New York, 1986, pp. 136-144.
8. Hobbs, J. Coherence and coreference. Cognitive Science 3, 1 (1979), 67-90.
9. Liberman, M. and Pierrehumbert, J. Intonational invariants under changes in pitch range and length. In Language sound structure, M. Aronoff and R. Oehrle, Eds. MIT Press, Cambridge, 1984.
10. Litman, D. and Allen, J. A plan recognition model for subdialogues in conversation. Cognitive Science 11 (1987), 163-200.
11. Mann, W.C. and Thompson, S.A. Relational Propositions in Discourse. ISI/RR-83-115, ISI/USC, November 1983.
12. Olive, J.P. and Liberman, M.Y. Text to speech -- An overview. Journal of the Acoustic Society of America, Suppl. 1 78, Fall (1985), s6.
13. Pierrehumbert, J.B. The phonology and phonetics of English intonation. PhD thesis, Massachusetts Institute of Technology, 1980.
14. Polanyi, L. and Scha, R. A syntactic approach to discourse semantics. In Proceedings of COLING84, COLING, Stanford, 1984, pp. 413-419.
15. Pollack, M.E., Hirschberg, J., and Webber, B. User Participation in the Reasoning Processes of Expert Systems. MS-CIS-82-9, University of Pennsylvania, 1982. A shorter version appears in the AAAI Proceedings, 1982.
16. Reichman, R. Getting computers to talk like you and me: discourse context, focus, and semantics. MIT Press, Cambridge MA, 1985.
17. Schegloff, E.A. The relevance of repair to syntax-for-conversation. In Syntax and semantics, 12: Discourse and syntax, T. Givon, Ed. Academic, New York, 1979, pp. 261-288.
18. Schourup, L. Common discourse particles in English conversation. Garland, New York, 1985.
19. Schuster, E. Explaining and Expounding. MS-CIS-82-49, University of Pennsylvania, 1982.
20. Silverman, K. Natural prosody for synthetic speech. PhD thesis, Cambridge University, 1987.
21. Zukerman, I. and Pearl, J. Comprehension-driven generation of meta-technical utterances in math tutoring. In Proceedings of the 5th National Conference, AAAI86, Philadelphia, 1986, pp. 606-611.
1987
23
On the Acquisition of Lexical Entries: The Perceptual Origin of Thematic Relations

James Pustejovsky
Department of Computer Science
Brandeis University
Waltham, MA 02254
617-736-2709
jamesp@brandeis.csnet-relay

Abstract

This paper describes a computational model of concept acquisition for natural language. We develop a theory of lexical semantics, the Extended Aspect Calculus, which together with a "markedness theory" for thematic relations, constrains what a possible word meaning can be. This is based on the supposition that predicates from the perceptual domain are the primitives for more abstract relations. We then describe an implementation of this model, TULLY, which mirrors the stages of lexical acquisition for children.

1. Introduction

In this paper we describe a computational model of concept acquisition for natural language making use of positive-only data, modelled on a theory of lexical semantics. This theory, the Extended Aspect Calculus, acts together with a markedness theory for thematic roles to constrain what a possible word type is, just as a grammar defines what a well-formed tree structure is in syntax. We argue that linguistically specific knowledge and learning principles are needed for concept acquisition from positive evidence alone. Furthermore, this model posits a close interaction between the predicates of visual perception and the early semantic interpretation of thematic roles as used in linguistic expressions. In fact, we claim that these relations act as constraints on the development of predicate hierarchies in language acquisition. Finally, we describe TULLY, an implementation of this model in ZETALISP, and discuss its design in the context of machine learning research.

There has been little work on the acquisition of thematic relations and case roles, due to the absence of any consensus on their formal properties. In this research we begin to address what a theory of thematic relations might look like, using learnability theory as a metric for evaluating the model. We claim that there is an important relationship between visual or imagistic perception and the development of thematic relations in linguistic usage for a child. This has been argued recently by Jackendoff (1983, 1985) and was an assumption in the pioneering work of Miller and Johnson-Laird (1976). Here we argue that the conceptual abstraction of thematic information does not develop arbitrarily but along a given, predictable path; namely, a developmental path that starts with tangible perceptual predicates (e.g. spatial, causative) to later form the more abstract mental and cognitive predicates. In this view thematic relations are actually sets of thematic properties, related by a partial ordering. This effectively establishes a markedness theory for thematic roles that a learning system must adhere to in the acquisition of lexical entries for a language. We will discuss two computational methods for concept development in natural language:

(1) Feature Relaxation of particular features of the arguments to a verb. This is performed by a constraint propagation method.
(2) Thematic Decoupling of semantically incorporated information from the verb.

When these two learning techniques are combined with the model of lexical semantics adopted here, the stages of development for verb acquisition are similar to those acknowledged for child language acquisition.
2. Learnability Theory and Concept Development

Work in machine learning has shown the usefulness to an inductive concept-learning system of inducing "bias" in the learning process (cf. [Mitchell 1977, 1978], [Michalski 1983]). An even more promising development is the move to base the bias on domain-intensive models, as seen in [Mitchell et al. 1985], [Utgoff 1985], and [Winston et al. 1983]. This is an important direction for those concerned with natural language acquisition, as it converges with a long-held belief of many psychologists and linguists that domain-specific information is necessary for learning (cf. [Slobin 1982], [Pinker 1984], [Bowerman 1974], [Chomsky 1980]). Indeed, Berwick (1984) moves in exactly this direction. Berwick describes a model for the acquisition of syntactic knowledge based on a restricted X-bar-syntactic parser, a modification of the Marcus parser ([Marcus 1980]). The domain knowledge specified to the system in this case is a parametric parser and learning system that adapts to a particular linguistic environment, given only positive data. This is just the sort of biasing necessary to account for data on syntactic acquisition.

One area of language acquisition that has not been sufficiently addressed within computational models is the acquisition of conceptual structure. For language acquisition, the problem can be stated as follows: How does the child identify a particular thematic role with a specific grammatical function in the sentence? This is the problem of mapping the semantic functions of a proposition into specified syntactic positions in a sentence. Pinker (1984) makes an interesting suggestion (due originally to D. Lebeaux) in answer to this question. He proposes that one of the strategies available to the language learner involves a sort of "template matching" of argument to syntactic position. There are canonical configurations that are the default mappings and non-canonical mappings for the exceptions. For example, the template consists of two rows, one of thematic roles, and the other of syntactic positions. A canonical mapping exists if no lines joining the two rows cross. Figure 1 shows a canonical mapping representing the sentence in (1), while Figure 2 illustrates a noncanonical mapping representing sentence (2).

[Figure 1: a canonical mapping -- the theta-roles A, Th, G/S/L are linked without crossing to the syntactic roles SUBJ, OBJ, OBL.]

[Figure 2: a noncanonical mapping -- the lines linking the theta-roles A, Th, G/S/L to the syntactic roles cross.]

(1) Mary hit Bill.
(2) Bill was hit by Mary.

With this principle we can represent the productivity of verb forms that are used but not heard by the child. We will adopt a modified version of the canonical mapping strategy for our system, and embed it within a theory of how perceptual primitives help derive linguistic concepts. As mentioned, one of the motivations for adopting the canonical mapping principle is the power it gives a learning system in the face of positive-only data. In terms of learnability theory, Berwick (1985) (following [Angluin 1978]) notes that to ensure successful acquisition of the language after a finite number of positive examples, something like the Subset Principle is necessary. We can compare this principle to a Version Space model of inductive learning ([Mitchell 1977, 1978]), with no negative instances. Generalization proceeds in a conservative fashion, taking only the narrowest concept that covers the data. How does this principle relate to lexical semantics and the way thematic relations are mapped to syntactic positions? We claim that the connection is very direct.
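The "no crossing lines" test behind Figures 1 and 2 is easy to state procedurally. The following sketch checks whether a theta-role-to-position mapping is canonical; the role inventories are taken from the figures, but the function name and encoding are illustrative assumptions:

# A mapping is canonical iff the syntactic positions assigned to the theta
# roles are monotone in the theta-role order, i.e. no linking lines cross.
THETA_ORDER = ["A", "Th", "G/S/L"]
SYN_ORDER   = ["SUBJ", "OBJ", "OBL"]

def is_canonical(mapping):
    # mapping: dict from theta role to syntactic role, e.g. {"A": "SUBJ", ...}
    positions = [SYN_ORDER.index(mapping[r]) for r in THETA_ORDER if r in mapping]
    return all(a < b for a, b in zip(positions, positions[1:]))

print(is_canonical({"A": "SUBJ", "Th": "OBJ"}))   # True:  "Mary hit Bill."
print(is_canonical({"A": "OBL", "Th": "SUBJ"}))   # False: "Bill was hit by Mary."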
Concept learning begins with spatial, temporal, and causal predicates being the most salient. This follows from our supposition that these are innate structures, or are learned very early. Following Miller and Johnson-Laird (1976), [Miller 1985], and most psychologists, we assume the prelinguistic child is already able to discern spatial orientations, causation, and temporal dependencies. We take this as a point of departure for our theory of markedness, which is developed in the next section.

3.0 Theoretical Assumptions

3.1 The Extended Aspect Calculus

In this section we outline the semantic framework which defines our domain for lexical acquisition. In the current linguistic literature on case roles or thematic relations, there is little discussion of what logical connection exists between one theta-role and another. Besides being the workhorse for motivating several principles of syntax (cf. [Chomsky 1981], [Williams 1980]), the most that is claimed is that Universal Grammar specifies a repertoire of thematic relations (or case roles) -- Agent, Theme, Patient, Goal, Source, Instrument -- and that every NP must carry one and only one role. It should be remembered, however, that thematic relations were originally conceived in terms of the argument positions of semantic predicates such as CAUSE and DO.1 That is, a verb didn't simply have a list of labelled arguments2 such as Agent and Patient, but had an interpretation in terms of more primitive predicates where the notions Agent and Patient were defined. The causer of an event (following Jackendoff (1976)) is defined as an Agent, for example, CAUSE(x, e) -> Agent(x). Similarly, the first argument position of the predicate GO is interpreted as Theme, as in GO(x, y, z). The second argument here is the SOURCE and the third is called the GOAL.

The model we have in mind acts to constrain the space of possible word meanings. In this sense it is similar to Dowty's aspect calculus but goes beyond it in embedding his model within a markedness theory for thematic types. Our model is a first-order logic that employs symbols acting as special operators over the standard logical vocabulary. These are taken from three distinct semantic fields: causal, spatial, and aspectual. The predicates associated with the causal field are Cause1 (C1), Cause2 (C2), and Instrument (I). The spatial field has only one predicate, Locative, which is predicated of an object we term the Theme. Finally, the aspectual field has three predicates, representing the three temporal intervals: t1, beginning; t2, middle; and t3, end. From the interaction of these predicates all thematic types can be derived. We call the lexical specification for this aspectual and thematic information the Thematic Mapping Index.

As an example of how these components work together to define a thematic type, consider first the distinction between a state, an activity (or process), and an accomplishment. A state can be thought of as reference to an unbounded interval, which we will simply call t2; that is, the state spans this interval.3 An activity or process can be thought of as referring to a designated initial point and the ensuing process; in other words, the situation spans the two intervals t1 and t2.

1. Cf. Jackendoff (1972, 1976) for a detailed elaboration of this theory.
2. This is now roughly the common assumption in GB, GPSG, and LFG.
Finally, an event can be viewed as referring to both an activity and a designated terminating interval; that is, the event spans all three intervals, t1, t2, and t3. Now consider how these bindings interact with the other semantic fields for the verb run in sentence (8) and give in sentence (9).

(8) John ran yesterday.
(9) John gave the book to Mary.

We associate with the verb run an argument structure of simply run(x). For give we associate the argument structure give(x, y, z). The Thematic Mapping Index for each is given below in (10) and (11).

[(10) and (11) are diagrams, garbled in this copy: (10) links the single argument of run to the intervals t1 and t2; (11) links the arguments of give to the locative and causal roles and to the intervals t1, t2, and t3.]

The sentence in (8) represents a process with no logical culmination, and the one argument is linked to the named case role, Theme. The entire process is associated with both the initial interval t1 and the middle interval t2. The argument x is linked to C1 as well, indicating that it is an Actor as well as a moving object (i.e. Theme). This represents one TMI for an activity verb. The structure in (9) specifies that the meaning of give carries with it the supposition that there is a logical culmination to the process of giving. This is captured by reference to the final subinterval, t3. The linking between x and the L associated with t1 is interpreted as Source, while the other linked arguments, y and z, are Theme (the book) and Goal, respectively. Furthermore, x is specified as a Causer and the object which is marked Theme is also an affected object (i.e. Patient). This will be one of the TMIs for an accomplishment. In these examples the three subsystems are shown as rows, and the configuration given is lexically specified.4

3. This is a simplification of our model, but for our purposes the difference is moot. A state is actually interpreted as a primitive homogeneous event-sequence, with downward closure. Cf. [Pustejovsky, 1987].
4. [Jackendoff 1985] develops a similar idea, but vide infra for discussion.
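Since the diagrams in (10) and (11) do not survive in this copy, the following is only a conjectural rendering of the two Thematic Mapping Indexes as data, with each argument bound to subsystem roles and intervals. The field names, and the interval bindings for give's arguments, are illustrative guesses reconstructed from the prose above:

# Hypothetical data rendering of the TMIs for "run" (activity) and "give"
# (accomplishment); not TULLY's actual representation.
TMI_RUN = {                     # run(x)
    "x": {"roles": ["Theme", "C1"], "intervals": ["t1", "t2"]},
}

TMI_GIVE = {                    # give(x, y, z)
    "x": {"roles": ["Source", "C1"], "intervals": ["t1"]},             # the giver
    "y": {"roles": ["Theme", "Patient"], "intervals": ["t1", "t2", "t3"]},  # the book
    "z": {"roles": ["Goal"], "intervals": ["t3"]},                     # the recipient
}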
3.2 A Markedness Theory for Thematic Roles

As mentioned above, the theory we are outlining here is grounded on the supposition that all relations in the language are sufficiently described in terms of causal, spatial and aspectual predicates. A thematic role in this view is seen as a set of primitive properties relating to the predicates mentioned above. The relationship between these thematic roles is a partial ordering over the sets of properties defining them. It is this partial ordering that allows us to define a markedness theory for thematic roles. Why is this important? If thematic roles were assigned randomly to a verb, then one would expect that there exist verbs that have only a Patient or Instrument, or two Agents or Themes, for example. Yet this is not what we find. What appears to be the case is that thematic roles are not assigned to a verb independently of one another, but rather that some thematic roles are fixed only after other roles have been established. For example, a verb will not be assigned a GOAL if there is not a THEME assigned first. Similarly, a LOCATIVE is dependent on there being a THEME present. This dependency can be viewed as an acquisition strategy for learning the thematic relations of a verb.

Now let us outline the theory. We begin by establishing the most unmarked relation that an argument can bear to its predicate. Let us call this role Theme. The only semantic information this carries is that of an existential quantifier. It is the only named role outside of the three interpretive systems defined above. Normally, we think of Theme as an object in motion. This is only half correct, however, since statives carry a Theme reading as well. It is in fact the feature [+/-motion] that distinguishes the role of Mary in (1) and (2) below.

(1) Stative: [-motion] Mary sleeps.
(2) Active: [+motion] Mary fell.

This gives us our first markedness convention:

(3) a. Theme -> Theme_A / [+motion]
    b. Theme -> Theme_S / [-motion]

where Theme_A is an "activity" Theme, and Theme_S is a stative. Within the spatial subsystem, there is one variable type, Location, and a finite set of them, L1, L2 ... Ln. The most unmarked location is that carrying no specific aspectual binding. That is, the named variables are L_S and L_G, commonly referred to as Source and Goal; L_U is the unmarked role. The number of named locative variables is perhaps constrained only by the aspectual system of the language (the richer the aspectual distinctions, the more named locative variables). The markedness conventions here are:

(4) L_U -> S / B
(5) L_U -> G / E

Within the causal subsystem there are three predicates, C1, C2, and I. We say C2 (the traditional Patient role) is less marked than C1, but more marked than I. These conventions give us the core of the primitive semantic relations. To be able to perform predicate generalization over each relation, however, we define a set of features that applies to each argument within the semantic subsystems. These are the abstraction operators that allow a perceptual-based semantics to generalize to non-perceptual relations. These features also have marked and unmarked values, as we will show below. There are four features that contribute to the generalization process in concept acquisition:

(a) [+/-abstract]
(b) [+/-direct]
(c) [+/-complete]
(d) [+/-animate]

The first feature, abstract, distinguishes tangible objects from intangible ones. Direct allows a gradience in the notion of causation and motion. The third feature, complete, picks out the extension of an argument as either an entire object or only part of it. Animacy has the standard semantics of labeling an object as alive or not. Let us illustrate how these operators abstract over primitive thematic roles. By changing the value of a feature, we can alter the description and, hence, the set of objects in its extension. Assume, for example, that the predicate C1 has as its unmarked value [+Direct].

(6) C1[uDirect] -> [+Direct]

By changing the value of this feature we allow C1, the direct agent of an event, to refer to an indirect causer.

(7) Agent[+Direct] => Agent[-Direct]

Similarly, we can change the value of the default setting for the feature [+Complete] to refer to a subcausation (or causation by part).

(8) Agent[+Complete] => Agent[-Complete]

These changes define a new concept, "effector", which is a superset of the previous concepts given in the system. The same can be done with C2 to arrive at the concept of an "effected object." We see the difference in interpretation in the sentences below.

a. John intentionally broke the chair. (Agent-direct)
b. John accidentally broke that chair when he sat down. (Agent-indirect)
c. John broke the chair when he fell. (Effector)

Given the manner in which the features of primitive thematic roles are able to change their values, we are defining a predictable generalization path that relations incorporating these roles will take. In other words, two concepts may be related thematically, but may have very different extensional properties.
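One way to model this generalization path concretely is to treat a role as a feature set and widening as the dropping of a feature specification: an Agent left unmarked for "direct" and "complete" covers both direct and indirect, whole and partial causers, i.e. the "effector" concept. This is a sketch under that assumption, not TULLY's actual representation:

# Widening a role concept by relaxing its feature specifications, as in (6)-(8).
def relax(role_features, feature):
    # Dropping a feature widens the concept's extension.
    widened = dict(role_features)
    widened.pop(feature, None)
    return widened

agent = {"causal": "C1", "direct": "+", "complete": "+"}
effector = relax(relax(agent, "direct"), "complete")
print(effector)   # {'causal': 'C1'}: the wider "effector" concept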
For example, give and take are clearly definable perceptual transfer relations. But given the abstractions available from our markedness theory, they are thematically related to something as distant as "experiencer verbs", e.g. please, as in "The book pleased John." This relation is a transfer verb with an incorporated Theme; namely, the "pleasure."5 If we apply these features in the spatial subsystem, we can arrive at generalized notions of location, as well as abstracted interpretations for Theme, Goal and Source. For example, given the thematic role Theme_A with the feature [-Abstract] in the default setting, we can generalize to allow for abstract relations such as like, where the object is not affected, but is an abstract Theme. Similarly, the Theme in a sentence such as (a) can be concrete and direct, or abstract, as in (b).

(a) have(L, Th) Mary has a book.
(b) have(L, Th) Mary has a problem with Bill.

In conclusion, we can give the following dependencies between thematic roles:

[The dependency diagram is garbled in this copy; it arranges the role sets -- {Theme}, the locative roles, {S, G}, and the causal roles -- in a partial ordering.]

The generalization features apply to this structure to build hierarchical structures (cf. [Keil 1979], [Kodratoff 1986]). This partial ordering allows us to define a notion of covering, as with a semi-lattice, from which a strong principle of functional uniqueness is derivable (cf. [Jackendoff 1985]). The mapping of a thematic role to an argument follows the following principle:

(9) Maximal Assignment Principle
An argument will receive the maximal interpretation consistent with the data.

This says two things. First, it says that an Agent, for example, will always have a location and theme role associated with it. Furthermore, an Agent may be affected by its action, and hence be a Patient as well. Secondly, this principle says that although an argument may bear many thematic roles, the grammar picks out that function which is maximally specific in its interpretation, according to the markedness theory. Thus, the two arguments might be Themes in "John chased Mary", but the thematic roles which maximally characterize their functions in the sentence are A and P, respectively.

4. The Learning Component

4.1 The Form of the Input

The input is a data structure pair: an event-sequence expression and a sentence describing the event. The event-sequence is a simulated output from a middle-level vision system where motion detection from the low-level input has already been associated with particular object types.6 The event-sequence consists of three instantaneous descriptions (IDs) of a situation represented as intervals. These correspond to the intervals t1, t2, and t3 in the aspect calculus. The predicates are perceptual primitives, such as those described in Miller and Johnson-Laird (1976) and Maddox and Pustejovsky (1987) -- e.g. AT(t1, x), ANIMATE(x), MOVES(x) (the full formula is garbled in this copy). The second object is a linguistic expression (i.e. a sentence), parsed by a simple finite state transducer.7

5. Cf. Pustejovsky (1987) for an explanation of this term and a full discussion of the extended aspect calculus.
6. For a detailed discussion of how the visual processing and linguistic systems interact, cf. Maddox and Pustejovsky (1987).
7. We are not addressing any complex interaction between syntactic and semantic acquisition in this system. Ideally, we would like to integrate the concept acquisition mechanisms here with a parser such as Berwick's; cf. Berwick 1985.
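A minimal sketch of such an input pair follows, with invented perceptual predicates and object names standing in for the garbled formula above; this is illustrative only:

# One input pair: three instantaneous descriptions (over t1-t3) plus the
# sentence describing the event. All predicate and object names are invented.
event_sequence = {
    "t1": ["AT(mary, loc1)", "AT(cat, loc2)", "MOVES(mary)"],
    "t2": ["MOVES(mary)", "CONTACT(mary, cat)"],
    "t3": ["AT(mary, loc2)", "AFFECTED(cat)"],
}
input_pair = (event_sequence, "Mary hit the cat")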
4.2 The Acquisition Procedure

We now turn to the design of the learning program itself. TULLY can be characterized as a domain-intensive inductive learning system, where the generalizations possible in the system are restricted by the architecture imposed by the semantic model. We can separate clearly what is given from what is learned in the system, as shown in Figure 1.

GIVEN                        ACQUIRED
Extended Aspect Calculus     Verbal lexical semantics
Theta-Markedness Theory      Argument-function mapping
Canonical Mapping            Predication hierarchy
Rule Execution Loop

Figure 1

In order to better understand the learning mechanism, we will step through an example run of the system. First, however, we will give the rule execution loop which the system follows.

Rule Execution Loop

1. Instantiate existing Thematic Mapping Indexes.
INSTANTIATE: Attempt to do a semantic analysis of the word given using existing Thematic Mapping Indexes. If the analysis fails then go to 2.

2. Concept-acquisition phase.
Note failure: credit assignment. Link arguments to roles according to Canonical Mapping.

3. Build new Thematic Mapping Index.
LINK and SHIFT: Construct a new index according to the Extended Aspect Calculus using information from credit assignment in (2). If this fails then go to (4).

4. Invoke Noncanonical Mapping Principle.
If (3) fails to build a mapping for the lexical item in the input, then the rule INTERSECT is invoked. This allows the lines to cross from any of the interpretive levels to the argument tier.

5. Generalization step.
This is where the markedness theory is invoked. Induction follows the restrictions in the theory, where generalization is limited to one of the stated types.

Assume that the first input to the system is the sentence "Mary hit the cat," with its accompanying event sequence expression, represented as a situation calculus expression. INSTANTIATE attempts to map an existing Thematic Mapping Index onto the input, but fails. Stage (2) is entered by the failure of (1), and credit assignment indicates where it failed. Heuristics will indicate which thematic properties are associated with each argument, and stage (3) links the arguments with the proper roles, according to Canonical Mapping. This links Mary to Agent and the cat to Patient. One important point to make here is that any information from the perceptual expression that is not grammatically expressed will automatically be assumed to be part of the verb meaning itself. In this case, the instrument of the hitting (e.g. Mary's arm) is covered by the lexical semantics of hit.

There are two forms of generalization performed by the system in step (5): constraint propagation and thematic decoupling. In a propagation procedure (cf. [Waltz, 1975]), the computation is described as operating locally, since the change has local consistency. To illustrate, consider the verb entry for have, as in (1),

(1) John has a book.
    have(x = L, y = Th)

where the object carries the feature [-abstract]. Now consider how the sense of the verb changes with a feature change to [+abstract], as in (2).

(2) John has an idea.

In other words, there is a propagation of this feature to the subject, where the sense of locative becomes more abstract, e.g. mental. These types of extensions give rise to other verbs with the same thematic mapping, but with "relaxed" interpretations.8

8. For further discussion of constraint propagation as a learning strategy, cf. Pustejovsky (1987b).
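The propagation in (1)-(2) can be pictured as a small constraint table; the entries below are illustrative assumptions, not the system's actual rules:

# Marking have's object [+abstract] propagates an abstract ("mental") sense
# of the locative to the subject; [-abstract] keeps the physical sense.
PROPAGATE = {
    ("have", "object", ("abstract", "+")): ("subject", ("locative", "mental")),
    ("have", "object", ("abstract", "-")): ("subject", ("locative", "physical")),
}

def propagate(verb, arg, feature):
    return PROPAGATE.get((verb, arg, feature))

print(propagate("have", "object", ("abstract", "+")))
# -> ('subject', ('locative', 'mental')): "John has an idea"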
The other strategy employed here is that of thematic decoupling, where thematic information becomes disassociated from the lexical semantics for a verb.9 The narrower interpretation of a verb's meaning will be arrived at after enough training instances are given; for example, from cut as meaning a particular action with a knife, to cut as an action that results in a certain state. It is interesting to speculate on how these strategies facilitate the development from perceptual relations to more abstract ones. The verb tell, for example, can be viewed as a transfer verb with a [+abstract] Theme, and the accompanying constraint propagation (cf. [Pinker, 1984] and [Jackendoff, 1983]). Similarly, experiencer verbs such as please, upset, and anger can be seen as combining both strategies: they are similar to transfer verbs, but with feature relaxation on the Theme, together with propagated constraints to the Source and Goal (the subject and object, respectively); the difference is that the Theme is incorporated and is not grammatically expressed.

John pleased his mother.
please(x = S, y = G, Th: incorporated)

9. Results given in Nygren (1977) indicate that children have fully incorporated instruments for verbs such as hammer, cut, and saw, and only at a later age do they abstract to a verb sense without a particular and constant instrument interpretation.

Conclusions

In this paper we have outlined a theory of acquisition for the semantic roles associated with verbs. Specifically, we argue that perceptual predicates form the foundation for later conceptual development in language, and propose a specific algorithm for learning employing a theory of markedness for thematic types and the two strategies of thematic decoupling and constraint relaxation and propagation. The approach sketched above will doubtless need revision and refinement on particular points, but is claimed to offer a new perspective which can contribute to the solution of some long-standing puzzles in acquisition.

Acknowledgements

I would like to thank Sabine Bergler, who did the first implementation of the algorithm, as well as Anthony Maddox, John Brolio, Ken Wexler, Mellissa Bowerman, and Edwin Williams for useful discussion. All faults and errors are of course my own.

References

[1] Angluin, D. "Inductive inference of formal languages from positive data." Information and Control 45: 117-135.
[2] Berwick, Robert C. The Acquisition of Syntactic Knowledge, MIT Press, Cambridge, MA, 1985.
[3] Berwick, Robert C., "Learning from Positive-Only Examples: The Subset Principle and Three Case Studies," in Michalski et al., 1986.
[4] Bowerman, Mellissa, "Learning the Structure of Causative Verbs," in Clark (ed.), Papers and Reports on Child Language Development, No. 8, Stanford University Committee on Linguistics, 1974.
[5] Chomsky, Noam, Rules and Representations, Columbia University Press, 1980.
[6] Chomsky, Noam, Lectures on Government and Binding, Foris, Holland, 1981.
[7] Dowty, David R., Word Meaning and Montague Grammar, D. Reidel, Dordrecht, Holland, 1979.
[8] Jackendoff, Ray, Semantics and Cognition, MIT Press, Cambridge, MA, 1983.
[9] Jackendoff, Ray, "The Role of Thematic Relations in Linguistic Theory," ms., Brandeis University, 1985.
Ganascia, "Improving the Generalization Step in Learning", in Michal- skiet el (eds.), Machine Learning II, Morgan Kauf- mann, [11] Marcus, Mltch, A Theory of Syntactic Recogni- tion for Natural Language, MIT Press, Cambridge, 1980 [12] Michalski, R.S., "A Theory and Methodology of Inductive Learning,", in Michalski et al (eds.), Ma- chins Learning L [13] Miller, George, "Dictionaries of the Mind" in Pro- ceedings of the 23rd Annual Meeting of the As- sociation for Computational Linguistics, Chicago, 1985. [14] Miller, George and Philip Johnson-Laird, Language and Perception, Belknap, Harvard University Press, Cambridge, MA. 1976. [15] Mitchell, Tom, "Version Spaces: A Candidate Elim- ination Approach to Rule Learning," in IJCAI-77, 1977 [16] Mitchell, Tom, Version Spaces: An Approach to Concept Learning, Ph.D. thesis Stanford, 1978. [17] Nygren, Carolyn, "Results of Experiments with In- strumentals," ms. UMASS, Amherst, MA. [18] Pilato, Samuel F. and Robert C. Berwick, "Re- versible Automata and Induction of the English Auxiliary System", in Proceedings of the 23rd An- num Meeting of the Association for Computational Linguistics, Chicago, 1985. [19] Pinker, Steven, Lan#uage Learnability and Lan- guage D~velopmcnt, Harvard University Press, Cam bridge, 1984 [20] Pustejovsky, James, "A Theory of Lexical Seman- tics for Concept Acqusition in Natural Language", to appear in /n~ernatioaa/Journal of Intelligent Systems [21] Pustejovsky, James and Sabine Bergler, "On the Acquisition of the Conceptual Lexicon", paper sub- mitted to AAAI-1987, Seattle, WA. [22] Slobin , D. "Universals and Particulars in Lan- guage Acqusition", in Gleitmann, Language Ac- quisition, Cambridge, 1982 [23] Waltz, David "Understanding line drawings of sce- nces with shadows," in The Psychology of Com- puter Vision, P. Winston ed. New York, McGraw- Hill, pp. 19-92. [24] Waltz, David "Event Space Descriptions," Pro- ceedings of the AAAI-82, 1982 [25] Williams, Edwin, "Predication", Linguistic Inquiry, 1980 [26] Winston, Patrick H., "Learning by Augmenting Rules and Accumulating Censors," in Michalski et al, 1986. [27] Winston, Patrick, Binford, Katz, and Lowry, "Learn ing Physical Descriptions from Functional Defini- tions, Examples, and Precedents, Proceedings of AAAI, Washington, 1983 178
1987
24
THE LOGICAL ANALYSIS OF LEXICAL AMBIGUITY

David Stallard, BBN Laboratories Inc.
10 Moulton St., Cambridge, Mass. 02238

Abstract

Theories of semantic interpretation which wish to capture as many generalizations as possible must face up to the manifoldly ambiguous and contextually dependent nature of word meaning.1 In this paper I present a two-level scheme of semantic interpretation in which the first level deals with the semantic consequences of syntactic structure and the second with the choice of word meaning. On the first level the meanings of ambiguous words, pronominal references, nominal compounds and metonymies are not treated as fixed, but are instead represented by free variables which range over predicates and functions. The context-dependence of lexical meaning is dealt with by the second level, a constraint propagation process which attempts to assign values to these variables on the basis of the logical coherence of the overall result. In so doing it makes use of a set of polysemy operators which map between lexical senses, thus making a potentially indefinite number of related senses available.

1 INTRODUCTION: LEXICAL ASSOCIATION IN A COMPOSITIONAL SEMANTICS

A tenet now held with some force among formal semanticists is that the meaning of a complex natural-language expression should be a function of just two things: the meanings of the parts of the expression and the syntactic rule used to form the expression out of those parts. Systems such as Montague Grammar [9] give phrases like "former senator" compositional treatments by first translating them to an expression of intensional logic, and then giving this expression a model-theoretic interpretation in the usual way. The practical relevance of this goal to work in natural language processing is clear: for any application domain, maximum coverage could be obtained from the same domain-independent set of rules, needing only to add the relevant entries, with their primitive associations of meaning, to the lexicon.

1. The work presented here was supported under DARPA contract #N00014-85-C-0016. The views and conclusions contained in this document are those of the author and should not be interpreted as necessarily representing the official policies, either expressed or implied, of the Defense Advanced Research Projects Agency or the United States Government.

An obvious technical issue for this program is raised by the phenomenon of lexical ambiguity. This problem is not one that has been particularly addressed in the Montague Grammar literature. The most obvious approach is simply to make alternative lexical senses separate entries in the lexicon, and to allow these disambiguated lexical items to give rise to separate syntactic and semantic analyses. The computationally unattractive consequences of this are quite clear: the same work must be done over again for each variant. An alternative class of proposals defers the lexical part of the analysis until the rest is done. Hobbs [5] has presented the most detailed general treatment of this type to date. This treatment simply associates each ambiguous lexical item with the logical disjunction of its separate senses. Standard reasoning techniques (such as theorem proving) can then be applied. The problem with this approach is that it is simply not correct. This may be most straightforwardly seen in yes/no questions that contain an ambiguity.
For example, suppose the ambiguous verb "have" is to be treated as the disjunction of the predicates POSSESS, PART-OF, etc. Then the answer to the question "Does the butcher have kidneys?" must always come out "yes", because the second alternative is (assumably) true regardless. This method goes wrong because the issue in resolving ambiguity is determining which possibility was intended, not which possibility is true.

A more correct approach is due to Landsbergen and Scha [8] and implemented in the PHLIQA1 system. There, the result of semantic interpretation is an expression of an ambiguous logical language called EFL (for English-oriented Formal Language). During semantic interpretation each lexeme is assigned to one (possibly ambiguous) descriptive constant of that language, which is later mapped, via local translation rules, to one or more expressions of an unambiguous logical language called WML (for World Model Language). The result is a set of complete WML translations of the entire EFL expression, from which sortally anomalous alternatives are subsequently eliminated.

The PHLIQA1 system, while handling homonymy acceptably, does not address the problem of polysemy -- the presence of an indefinite number of related senses for a single word. Consider the polysemous lexeme "mouth", which is used differently in the phrases "mouth of a person", "mouth of a bottle", "mouth of a river", and "mouth of a cave". Surely the same logical relationship is not involved in each of these cases. Generalizing the meaning of the word will not help either, for if we tried to re-define "mouth" to mean just any aperture, we would lose our ability to refer to human "mouths" independently of other parts of the body. Enumerating these separate senses with separate translation rules does not look like a very promising approach either, since it is not at all clear that the list above could not be continued indefinitely. The problem with such an approach to meaning seems to be that it is too discrete: in linguistic terms, it does not "capture a generalization".

This paper presents a method of dealing with lexical meanings which does seek to capture the generalizations implicit in polysemy. The complexes of meanings associated with polysemous lexical items are generated, structured and extended by a kind of "grammar" of word meaning: a set of operators which take descriptive constants of a meaning representation language onto other descriptive constants or expressions of that language. These operations include not only metaphorical and metonymic extension of the word sense, but "broadening", which allows a word to refer to a wider class of items than before; "exclusion", which removes from the denotation of a word the members of a particular subset thereof; and "narrowing", which narrows the denotation down to a particular subset. Each word is assumed to have a core sense (or, in the case that it is homonymous, several core senses) from which extended senses can be derived by recursive application of the operators.

Related to the issue of lexical ambiguity, if traditionally studied apart from it, are the problems raised by nominal compounds and metonymies. Here the problem is determining the binary relation which has been "elided" from the utterance. This could in principle be any relation; a translation rule approach cannot help here. Novel metaphorical uses of a word, such as the substitution of an individual for a whole class, will also escape such an approach.
The point about all three of these phenomena is that they essentially create new lexical senses. The productiveness of this process suggests that the established senses of polysemous lexemes may be generated in the same way.

A key innovation of this work is to treat every non-logical word as being potentially ambiguous. Thus semantic interpretation initially assigns to each such lexical item not an ambiguous constant, but a free variable capable of ranging over the appropriate type of a higher-order intensional logic [4]. These free variables are restricted to range not over an explicitly enumerated set of logical expressions, but over a potentially infinite set of them which is recursively enumerable by the polysemy operators. Obviously, the core sense itself (and other established senses) are not excluded as candidates. A separate constraint propagation stage then assigns appropriate descriptive constant values to these variables based on the sortal coherence of the whole expression. This two-stage method of semantic interpretation will be seen to have an advantage over one not discussed so far: a single-stage method which does not allot a separate role to lexical semantics or pay close attention to compositionality, but rather seeks to interpret distinct patterns like "mouth of a cave" as a whole. Besides suffering from the same lack of generality criticised above, this latter method encounters difficulty when an ambiguous word-form and a pronoun or trace are combined together. A second constraint propagation stage enables the dependence of word meaning on context -- specifically, on the meanings of other words and the referents of anaphors and deixis in the utterance -- to be captured. The computational effect is that search can be cut down in a space that is essentially a cartesian product over the ambiguous elements of an utterance.

2 THE NOTION OF A "LOGICAL VOCABULARY"

Lexical association cannot be considered apart from a notion of "logical" or "conceptual" vocabulary -- the set of descriptive constants of a logical language which are available for making such associations. This notion may be identified with the "domain model" or "conceptual model" of such systems as PHLIQA1 [11], TEAM [3] and IRUS [1]. Logical vocabularies, or "domains", are what the polysemy operators work with. The present section lays down the representational structure which the next, dealing with the polysemy operators themselves, will make use of. Let a "domain" be defined as a set of descriptive constants and axioms involving them, subject to three conditions: (1) The descriptive constants are such that a specification of each of their extensions gives a "state of the world" relevant to the domain; (2) The axioms are such that they constrain which states of the world are possible or allowable; (3) The axioms do not define the constants with the biconditional, but with one-way implication only, thus leaving the constants primitive. If complete definition of constants via lambda-abstraction is allowed it is only as a technical convenience; such definitions are to be regarded as "extra". The latter condition (3) captures the important fact that domains are not definable in terms of other domains. Thus expressions cast in logical vocabulary D_A cannot be directly used to refer to states of affairs, etc. expressible only in terms of logical vocabulary D_B.
This has an impact for natural language question answering systems in which D_A is the notions of ordinary language and D_B the logical vocabulary of some technical domain. In this case, only lexical items specially invented for the technical domain (such as "JP-5", a particular kind of military jet fuel) have an unproblematic lexical association in terms of D_B. Obviously not all the words a user employs will have this characteristic, nor will all the constants of the technical domain be lexicalizable in this way. In other cases the notions of D_A will have to be mapped to those of D_B, in some way that is not yet specified.

A common occurrence is for lexical items available in regular English to be employed to bridge the gap, in such a way as to multiply their effective ambiguity. Consider a question seeking to find ships with a certain offensive capability: "What ships carry Harpoon missiles?". On a literal interpretation of the word "carry" the predication of the sentence is satisfied whether the ships "carry" the missiles as weaponry or as incidental cargo, yet of these only the first alternative is the desired one. If the query were instead "What ships carry oranges?" the second alternative is the preferable one. The resultant "splitting" of lexical senses can be regarded as a form of ambiguity generated by the contact between logical domains.

Other kinds of mapping between notions of different domains are more complex, not taking place along the lines of greater or lesser specificity, but involving instead another kind of mapping that is really tantamount to metaphor. A phrase like "in Marketing", for example, is not locative in the literal sense of location in space but rather makes use of a metaphor having to do with this notion. Here the initial domain is that of space and spatial inclusion, while the final one is that of, say, fields of employment or expertise. The formal representation of metaphor used in this work is that of Indurkhya [7]. Indurkhya identifies a metaphor with the formal notion of a "T-MAP": a pair <F,S> where F is a function mapping descriptive constants from one domain to another and S is a set of sentences which are expected to carry over from the first domain to the second. A metaphor is "coherent" if the transferred sentences S are logically consistent with the axioms of the target domain, "strongly coherent" if they already lie in the deductive closure of those axioms.

Depending on the formal language used to represent the statements S, one may encounter computational difficulties (i.e. decidability) with this program. One way around this is not to use predicate calculus (as Indurkhya does) but a language that is more restrictive than predicate calculus. For the price of surrendering complete expressive power one gains the advantage of deductive tractability. One system which may be used for this purpose is the NIKL [10] system, in which only a few types of axioms can be encoded. A descriptive constant subsumes another of the same complex type if its extension is always a superset of the other. Two constants are disjoint if their extensions are always disjoint. (Note that respective subsumees of the two constants "inherit" this disjointness.) Relations of more than one argument have sortal (one-place predicate) restrictions on their arguments, thus stipulating that the extension of the relation will always be a subset of the cartesian product of the extensions of the sorts. Finally, a one-place predicate P restricts a binary relation R to be Q if the image under R of each member of P's extension is a member of the extension of the second one-place predicate Q. In what follows I will treat this operation as restricting the form that the extension of the relation R may take on, so that the placing of constraint P on the first argument results in a propagation of the constraint Q on the second argument.
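The following sketch shows how these four axiom types might be stored and used for propagation, in the spirit of (but not in) the NIKL language; all class and relation names are invented for illustration:

# Four axiom types: subsumption, disjointness, sortal restrictions on a
# relation's arguments, and "P restricts R to be Q" value restrictions.
SUBSUMES  = {("VESSEL", "SUBMARINE")}                  # first subsumes second
DISJOINT  = {frozenset({"PHYSICAL-OBJECT", "DISEASE"})}
SORTS     = {"PART-OF": ("ORGANISM", "ORGANIC")}       # general argument sorts
RESTRICTS = {("PERSON", "PART-OF"): "BODY-PART"}       # P restricts R to be Q

def second_argument_sort(first_arg_sort, relation):
    # Placing constraint P on the first argument of R propagates Q (if any)
    # onto the second argument; otherwise only R's general sort applies.
    specific = RESTRICTS.get((first_arg_sort, relation))
    return specific or SORTS.get(relation, (None, None))[1]

print(second_argument_sort("PERSON", "PART-OF"))    # BODY-PART
print(second_argument_sort("ORGANISM", "PART-OF"))  # ORGANIC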
3 THE LEXICAL CONSTRAINT MODULE

3.1 Overview

In this section I present a solution to the multiple problems of ambiguity posed by a natural utterance. Added to the architecture of semantic interpreter, discourse model, lexicon and domain model is a new component, the lexical constraint module. It accepts from the semantic interpreter a logical form containing free variables of higher order and constructs from it a constraint graph structure in which such variables are connected in accordance with the syntactic structure of the expression. This structure is then used in a constraint-propagation process that attempts to assign descriptive constant values to the expressions. The lexicon in this scheme stores for each non-logical word an extendable polysemic complex (or complexes, in the case of homonymy) of logical associations. I shall describe assumptions about the semantic rule set-up as I go along.

In making these assignments, the module applies a "maxim of coherence". That is, we assume that the user will not deliberately speak nonsense to us, use terms redundantly, or make use of elaborate means to refer to the null set. A coherent outcome is one where the descriptive constants being applied to the same terms (bound variables and individual constants) are not sortally disjoint. This may not always be achievable with the core senses of words. When it is not, a set of "polysemy operators" is invoked to re-interpret a lexical assignment in such a way as to make sense of the expression.

I will first consider an example where no such re-interpretation is required. For the utterance "John has a car", the following logical form is given as input to the constraint module:

(∃x (car x) & (have John x))

Here the symbols car and have are the free variables. Suppose the main verb "have" to be homonymous among the predicates PART-OF, OWN and AFFLICTED-WITH. The last of these is eliminable because the argument sorts it requires and the sorts given to it do not agree: physical objects and diseases are disjoint sets. Such surface inspection of argument sorts is not the only source of constraint, however. For some relations a particular constraint on the first argument causes a constraint on the second argument. Thus, the alternative PART-OF is eliminable because the parts of an organism must themselves be organic material, something clearly disjoint with artifacts like cars. The constraint graph is now satisfied, and we are left with:

(∃x (CAR x) & (OWNS JOHN x))
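A minimal sketch of this elimination step, using a taxonomy object like the one sketched earlier; the sense inventory and sort names here are assumptions made for the example.

    # Candidate senses for "have", each with its required argument sorts.
    HAVE_SENSES = {
        "OWN":            ("PERSON", "PHYSICAL-OBJECT"),
        "PART-OF":        ("ORGANISM", "ORGANIC-MATERIAL"),
        "AFFLICTED-WITH": ("ORGANISM", "DISEASE"),
    }

    def coherent_senses(taxonomy, subj_sort, obj_sort, senses):
        """Keep only the senses whose argument sorts are not disjoint
        with the sorts actually supplied by the utterance (the 'maxim
        of coherence')."""
        keep = {}
        for name, (s1, s2) in senses.items():
            if not taxonomy.disjoint(s1, subj_sort) and \
               not taxonomy.disjoint(s2, obj_sort):
                keep[name] = (s1, s2)
        return keep

    # For "John has a car": subject sort PERSON, object sort CAR.
    # AFFLICTED-WITH falls to the surface sort clash (CAR vs. DISEASE);
    # PART-OF falls to value-restriction propagation (parts of organisms
    # are organic material, which is disjoint with artifacts).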
3.2 The polysemy operators

We now proceed to overconstrained cases in which potential assignments are in conflict, and re-interpretation by the polysemy operators is required. For the first pair of such operators, generalization and exclusion, we will make use of the Montague Grammar notion of universal sublimation [2]. A universal sublimation of a concept A is just the set of properties which are true of all A's members, or:

(λP (∀x A(x) -> P(x)))

Generalization and exclusion operate upon lexical senses by modifying their universal sublimations and looking for the alternative meaning (if any) of the word that most closely corresponds to this new set.

As an example of generalization, consider the phrase "plastic silverware". While in literal terms this is oxymoronic, one often sees it used to refer to plastic eating utensils, and in situations where only these items are available, the word "silverware" alone may be used to denote them. Obviously for such speakers the class EATING-UTENSIL is available as an extended and generalized sense of "silverware". The initial representation would be:

(λx (and (plastic x) (silverware x)))

A portion of the sublimation of the concept SILVERWARE is the set {MADE-OF-SILVER, EATING-UTENSIL}. Of these, it is the first property that is disjoint with PLASTIC, and a new sublimation is constructed which excludes it. In the partial representation above, this new sublimation is just the class EATING-UTENSIL itself.

Exclusion takes a lexical sense onto one from which particular sub-senses have been explicitly excluded. Consider the sentence "The Thresher is not a ship, it's a submarine", or, to be free about its logical form:

(CONTRAST (ship Thresher) (submarine Thresher))

If we assign the core meanings to these words this is nonsensical, since SUBMARINEs are, by definition, SHIPs as well. The expression coheres if whatever is assigned to ship excludes SUBMARINE. We form a partial sublimation {SHIP, ~SUBMARINE}, and find corresponding to it the alternative sense of "ship", SURFACE-SHIP.

A surprising number of words have such alternative exclusionary senses, among them "axe", where HATCHET is excluded; "animal", where HUMAN is excluded; and "blue", where TURQUOISE (and other off-color shades) is excluded. The phenomenon seems to be that a specialized term for some distinguished subset of a concept comes to be the preferred term for members of that subset. The all-embracing word can still be used, but it comes to have a sense which is contrastive with these distinguished subsets. From the impression made by a Venn diagram of the set and its excluded subsets we might call this "cut-out" polysemy.

One wonders if certain phenomena which have been described as ill-formedness might not in fact be instances of this sort of polysemy. Goodman, for instance, uses the actual word pair "blue" - "turquoise" as an example of "miscommunication" (Goodman, 1985). What seems more plausible, however, is that the speaker describing a turquoise object as "blue" is not really misspeaking, but is rather using the word "blue" in the more inclusive sense which embraces all shades of the color.
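Both operators can be sketched as operations on stored sublimations (property sets). The table below contains just enough senses for the two examples, and all names are illustrative.

    SUBLIMATIONS = {
        "SILVERWARE":     frozenset({"MADE-OF-SILVER", "EATING-UTENSIL"}),
        "EATING-UTENSIL": frozenset({"EATING-UTENSIL"}),
        "SHIP":           frozenset({"SHIP"}),
        "SURFACE-SHIP":   frozenset({"SHIP", "~SUBMARINE"}),
    }

    def generalize(taxonomy, sense, clashing):
        """Drop the properties that clash, then look for an established
        sense whose sublimation matches the reduced set."""
        reduced = frozenset(p for p in SUBLIMATIONS[sense]
                            if not taxonomy.disjoint(p, clashing))
        return [s for s, props in SUBLIMATIONS.items() if props == reduced]

    def exclude(sense, excluded):
        """Add an explicit negative property ('cut-out' polysemy) and
        look for a sense matching the augmented set."""
        augmented = SUBLIMATIONS[sense] | {"~" + excluded}
        return [s for s, props in SUBLIMATIONS.items() if props == augmented]

    # generalize(tax, "SILVERWARE", "PLASTIC") -> ["EATING-UTENSIL"]
    # exclude("SHIP", "SUBMARINE")             -> ["SURFACE-SHIP"]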
Metonymic extension re-interprets a predicate by interposing an arbitrary, sortally compatible relation between an argument place of the predicate and the actual argument. An example can be seen in the command "Highlight C3 tracks", where "C3" is a predication made of ships and "tracks" are trajectories of ship positions, traced out on a screen. Obviously, on literal interpretation, this utterance does not cohere, since physical objects (ships) and graphical objects (tracks) are disjoint. We have:

(HIGHLIGHT (λx (c3 x) & (track x)))

The categories SHIP and TRACK have too many clashing properties for generalization or exclusion to prevail. Instead, the two clashing elements are reconciled by finding a function or relation reaching between SHIPs and TRACKs (or subsuming categories) and metonymically extending one of the items with it. The extended meaning of "C3" can be expressed by:

(λx (∃y (and (SHIP y) (SHIP-TRACK y x) (C3 y))))

In any usage of the metonymy operation there is a choice about which of two clashing elements to extend. In this case it would also have been possible to have metonymically extended "track" instead of "C3". The resultant expression would then denote a set of ships instead of tracks, clearly not what is wanted here. Moreover, it would not itself be immediately coherent, since "highlighting" can only be done on graphical objects. More importantly, metonymies seem less likely to shift the head noun's meaning, since this changes the sortal category of what is being referred to and operated upon by the utterance. This tendency seems to be particularly strong when the head noun's meaning has an underlying functional role, as does "track" in this case.

Note that many words which at first appear to have unitary senses are actually better described in terms of metonymic complexes. Thus, "window" can be used to refer to its constituent pane of glass, its sash, or the opening around it. Similar examples can be seen in "light", which can be used to refer to the actual electromagnetic radiation or the device for producing it, and "bank" (in the fiscal sense), which can be used to refer to the building or the financial institution itself.

Metaphorical extension operates not by shifting an argument place of a predicate, but by shifting the predicate itself. Capturing the generality in the meaning of "mouth" in the example of section 1 involves capturing a class of metaphors involving that concept. Classes of metaphors are described by the notion of a parameterized T-MAP, in which the mapping function F and set of sentences S are not completely specified, but may instead have missing elements which must be solved for. Let "mouth of the cave" be given by:

(mouth (iota x (cave x)))

The functional constant MOUTH is restricted to operate on individuals of the class ANIMAL, so the above is incoherent on literal interpretation. A metaphorical re-interpretation must select certain constants for the mapping function F and certain facts S which carry over to the new domain. Two such facts are:

SUBSUMES(ENCLOSES-SPACE, ANIMAL)
SUBSUMES(OPENING, MOUTH)

In this use of the word "mouth" it is operating on individuals of the class CAVE instead of ANIMAL. One element of the mapping function F is thus the pair (ANIMAL, CAVE). In order to determine the relationship that the word "mouth" really means in the example we must solve for a function variable P which MOUTH is mapped to. This function must be sortally coherent with CAVE; it is the right-hand member of the second ordered pair of the mapping function F. The sentences to be transferred are:

SUBSUMES(ENCLOSES-SPACE, CAVE)
SUBSUMES(OPENING-OF, P)

Of these, the first is not only not inconsistent, but true. One descriptive constant of the geological domain which is obviously not incoherent with CAVE is the function CAVE-ENTRANCE. If this function is used in place of P the second sentence is satisfied as well.
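The solving step can be sketched as a search over candidate target-domain constants. The procedure below is an illustrative reconstruction of a parameterized T-MAP solver, not the system's actual algorithm.

    def solve_tmap(taxonomy, candidates, s_template, var="P"):
        """Instantiate the open parameter of a parameterized T-MAP with
        each candidate constant and keep those under which every
        transferred SUBSUMES sentence holds in the target domain."""
        solutions = []
        for cand in candidates:
            ok = all(taxonomy.subsumes(a if a != var else cand,
                                       b if b != var else cand)
                     for _, a, b in s_template)
            if ok:
                solutions.append(cand)
        return solutions

    # F already maps ANIMAL -> CAVE; S transfers two SUBSUMES facts,
    # one of which contains the unknown P:
    S = [("SUBSUMES", "ENCLOSES-SPACE", "CAVE"),
         ("SUBSUMES", "OPENING-OF", "P")]
    # Given axioms under which both facts hold for CAVE-ENTRANCE:
    # solve_tmap(tax, {"CAVE-ENTRANCE", "STALACTITE"}, S)
    #   -> ["CAVE-ENTRANCE"]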
An important metric of metaphorical plausibility is how much structure in S is transferred from source to target domain versus how many descriptive constants are mapped via the function F. In the present example the ratio is one. Clearly if this ratio is high the metaphor is stronger and more plausible; if it is low the metaphor is less so.

3.3 Nominal Compounds

Nominal compounds are treated by assuming that the semantic rules formulate their interpretation with a free binary predicate variable standing in for the relation which must be determined to complete the interpretation of the compound. Interpreting the nominal compound thus becomes solving for this predicate variable. This variable is initially unconstrained except by the sorts of the noun meanings it connects.

A problem with some nominal compounds is that they seem to violate the restrictions imposed by their component parts. For example, a "staple gun" is not a weapon at all, and would thus on some treatments have to be treated either idiomatically or as a completely incoherent expression. With the approach presented here, however, the polysemy operators can be invoked to find a re-interpretation of the words for which a solution does exist. The word "gun" can be re-interpreted to discard the clashing property of shooting bullets only, and to denote in this case the wider class of devices that eject objects of whatever type.

An important point about nominal compounds is that they cannot be treated extensionally. A soup pot is still such whether it currently contains something different from soup, or indeed whether it contains anything at all. Clearly, the relation to be solved for in a nominal compound may in general be a non-extensional one between kinds. Such a relation may in turn have a meaning postulate which dictates which actual entities (such as the actual soup) may be related at which indices of time. This phenomenon would seem to pose a problem for Hobbs and Martin [6], who view as a sub-problem resolving the reference of the "lube oil" in the compound "lube oil alarm". One can imagine a "lube oil alarm" which only sounds when all the lube oil is gone.

3.4 Effect on Anaphora Resolution

Even after syntactic and pragmatic considerations have been taken into account, the decision on the correct referents for anaphora cannot take place independently of considerations of word meaning choice. Consider the following two sentences:

(1) The table is in that building.
(2) It is a bank.

The proper referent of "it" in (2) is constrained by the predication made by the ambiguous lexical item "bank", namely that it either be a RIVER-BANK or a BANK-BUILDING. Neither is sortally coherent with TABLE, so the referent described by "the table" is eliminable. The only candidate left is the individual described by "that building", and since BUILDING, being an ARTIFACT, is disjoint with RIVER-BANK, the proper sense of "bank" is BANK-BUILDING and the referent of "it" is "that building". (A sketch of this joint filtering follows.)
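The joint filtering just described can be sketched as a single pass over candidate (referent, sense) pairs; the sort names follow the example, while the helper itself is illustrative.

    def filter_pairs(taxonomy, referents, senses):
        """Cross-constrain anaphor resolution and word-sense choice:
        a pairing survives only if the referent's sort and the sense's
        sort are not disjoint."""
        return [(r, s)
                for r, r_sort in referents
                for s, s_sort in senses
                if not taxonomy.disjoint(r_sort, s_sort)]

    referents = [("the table", "TABLE"), ("that building", "BUILDING")]
    senses = [("bank/RIVER-BANK", "RIVER-BANK"),
              ("bank/BANK-BUILDING", "BANK-BUILDING")]
    # With TABLE disjoint from both senses, and BUILDING (an ARTIFACT)
    # disjoint from RIVER-BANK, the sole survivor is
    # ("that building", "bank/BANK-BUILDING").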
3.5 Algorithm and Heuristics

The algorithm used by the lexical constraint module is a search loop consisting of just three parts: tentative assignment, constraint propagation and re-interpretation. On the first iteration, tentative assignment constrains each word-variable with its core logical sense, or the set of its core senses if it is homonymous. These serve as entry points to the polysemy complexes. Variables associated with anaphors are initially constrained by whatever pragmatic and syntactic (such as C-command) considerations are seen to apply. The variables associated with nominal compounds are initially left unconstrained. Thereafter, constraint propagation may end up in one of three states: satisfaction, in which case the module returns a single logical expression; underconstraint, in which case there is an ambiguity with which the user must be presented; or overconstraint, in which case re-interpretation is invoked to search for an interpretation which coheres.

The most important issue in performing re-interpretation is controlling the process so that the system does not "hallucinate" arbitrary meanings into an expression. The control heuristics include:

1. consider overconstrained variables for re-interpretation first
2. prefer generalizations and exclusions which modify a small number of properties
3. prefer metaphorical extensions with a high ratio of plausibility (as in section 3.2) and minimize the number of "augmentations" and "positings" [7]
4. avoid multiple re-interpretations of the same item
5. prefer re-interpretations to already established polysemous senses instead of creating new ones

In Hobbs' work [6] control turns on a notion of a "cost function" associated with the lengths of proofs. The notion of "minimality" in that work has some similarity to the heuristics above, which seek to avoid arbitrary re-interpretations of lexical meanings by preferring conservative re-interpretations and discouraging multiple ones.

The creation by the polysemy operators of a new sense for a word can effectively be regarded as a kind of "learning". Thus, given the sentence "That's not a lion, that's a lioness", the system could deduce (via the exclusion operator) that an alternative sense of "lion" means a male lion only. One should not presume, however, that the discovery of new lexical senses will occur on a constant basis. The last heuristic above is therefore an important one.

4 CONCLUSIONS

This component will be implemented in a future version of BBN's JANUS natural language understanding system. Included in this system will be a unification parser with a large grammar and a new and improved semantic interpreter.

I have tried to show how a compositional semantics need not be incompatible with a context-dependent notion of word meaning by making a division of labor between the rule-to-rule translation of syntactic structure and the complex semantics of lexical items. I shall even go so far as to say that such a division of labor is necessary for the compositional program to succeed. A component which takes into account the creativity of lexical meanings and which utilizes knowledge representation and limited inference not only gives word meaning its proper place in a modular system but also has the potential of extending coverage and flexibility beyond what is currently available in natural language systems.

Acknowledgements

I would like to thank Remko Scha for his many useful comments on this work. I would also like to thank Erhard Hinrichs and Bob Ingria for their comments and encouragement, and Jessica Handler for valuable linguistic data.

References

[1] Bates, Madeleine and Bobrow, Robert J. A Transportable Natural Language Interface for Information Retrieval. In Proceedings of the 6th Annual International ACM SIGIR Conference. ACM Special Interest Group on Information Retrieval and American Society for Information Science, Washington, D.C., June, 1983.

[2] David R. Dowty, Robert E. Wall, and Stanley Peters. Introduction to Montague Grammar. D. Reidel Publishing Company, 1981.

[3] Barbara Grosz, Douglas E. Appelt, Paul Martin, and Fernando Pereira.
TEAM: An Experiment in the Design of Transportable Natural-Language Interfaces. Technical Report 356, SRI International, Menlo Park, CA, August, 1985.

[4] Hinrichs, Erhard W. A Revised Syntax and Semantics of a Semantic Interpretation Language. 1986.

[5] Hobbs, Jerry R. Overview of the TACITUS Project. In Proceedings of the DARPA 1986 Strategic Computing Natural Language Workshop, pages 19-25. The Defense Advanced Research Projects Agency, May, 1986.

[6] Jerry R. Hobbs and Paul Martin. Local Pragmatics. In Proceedings, IJCAI-87. International Joint Conferences on Artificial Intelligence, Inc., August, 1987. To appear.

[7] Indurkhya, Bipin. Constrained Semantic Transference: A Formal Theory of Metaphors. Technical Report 85/008, Boston University, 1985.

[8] Landsbergen, S.P.J. and Scha, R.J.H. Formal Languages for Semantic Representation. In Allen and Petofi (editors), Aspects of Automatized Text Processing: Papers in Textlinguistics. Hamburg: Buske, 1979.

[9] Montague, R. The Proper Treatment of Quantification in Ordinary English. In J. Hintikka, J. Moravcsik and P. Suppes (editors), Approaches to Natural Language: Proceedings of the 1970 Stanford Workshop on Grammar and Semantics, pages 221-242. Dordrecht: D. Reidel, 1973.

[10] Moser, Margaret. An Overview of NIKL. Technical Report Section of BBN Report No. 5421, Bolt Beranek and Newman Inc., 1983.

[11] Scha, Remko J.H. Logical Foundations for Question-Answering. Philips Research Laboratories, Eindhoven, The Netherlands, 1983.
FLUSH: A Flexible Lexicon Design

David J. Besemer and Paul S. Jacobs
Artificial Intelligence Branch, GE Corporate Research and Development, Schenectady, NY 12301 USA

Abstract

Approaches to natural language processing that use a phrasal lexicon have the advantage of easily handling linguistic constructions that might otherwise be extragrammatical. However, current phrasal lexicons are often too rigid: their phrasal entries fail to cover the more flexible constructions. FLUSH, for Flexible Lexicon Utilizing Specialized and Hierarchical knowledge, is a knowledge-based lexicon design that allows broad phrasal coverage.

I. Introduction

Natural language processing systems must use a broad range of lexical knowledge to account for the syntactic use and meaning of words and constructs. The problem of understanding is compounded by the fact that language is full of nonproductive constructs: expressions whose meaning is not fully determined by examining their parts. To handle these constructs, some systems use a phrasal lexicon [Becker, 1975; Wilensky and Arens, 1980b; Jacobs, 1985b; Steinacker and Buchberger, 1983; Dyer and Zernik, 1986], a dictionary designed to make the representation of these specialized constructs easier.

The problem that phrasal lexicons have is that they are too rigid: the phrasal knowledge is entered in a way that makes it difficult to represent the many forms some expressions may take without treating each form as a distinct "phrase". For example, expressions such as "send a message", "give a hug", "working directory", and "pick up" may be handled as specialized phrases, but this overlooks similar expressions such as "give a message", "get a kiss", "working area", and "take up". Specialized constructs must be recognized, but much of their meaning as well as their flexible linguistic behavior may come from a more general level.

A solution to this problem of rigidity is to have a hierarchy of linguistic constructions, with the most specialized phrases grouped in categories with other phrases that behave similarly. The idea of a linguistic hierarchy is not novel, having roots in both linguistics [Lockwood, 1972; Halliday, 1978] and Artificial Intelligence [Sondheimer et al., 1984]. Incorporating phrasal knowledge into such a hierarchy was suggested in some AI work [Wilensky and Arens, 1980a], but the actual implementation of a hierarchical phrasal lexicon requires substantial extensions to the phrasal representation of such work.

The Flexible Lexicon Utilizing Specialized and Hierarchical knowledge (FLUSH) is one component in a suite of natural language processing tools being developed at the GE Research and Development Center to facilitate rapid assimilation of natural language processing technology to a wide variety of domains. FLUSH has characteristics of both traditional and phrasal lexicons, and the phrasal portion is partitioned into four classes of phrasal entries:

• word sequences
• lexical relations
• linguistic relations
• linguistic/conceptual relations

FLUSH's mechanisms for dealing with these four classes of specialized phrases make use of both general and specific knowledge to support extensibility. FLUSH is the lexical component of a system called TRUMP (TRansportable Understanding Mechanism Package) [Jacobs, 1986b], used for language analysis in multiple domains. This paper will describe the phrasal knowledge base of FLUSH and its use in TRUMP.
II. Compound Lexical Knowledge in FLUSH

Because the knowledge embodied in single-word lexemes is not enough to account for nonproductive expressions, FLUSH contains phrasal entries called compound lexemes. This section first illustrates how each of the four classes of compound lexemes is represented in FLUSH and then describes the algorithm for accessing the compound lexemes. So that the reader is better equipped to understand the figures in the rest of this paper, the next paragraph briefly introduces the knowledge representation scheme that is employed by FLUSH.

Knowledge representation in FLUSH uses Ace [Jacobs and Rau, 1984; Jacobs, 1985a], a hierarchical knowledge representation framework based on structured inheritance. Most of Ace's basic elements can be found in other knowledge representation schemes (e.g., isa links, slots, and inheritance) [Bobrow and Winograd, 1977; Brachman and Schmolze, 1985; Wilensky, 1986], but Ace has the unique ability to represent referential and metaphorical mappings among categories (see descriptions of ref and view below).

[Figure 1: The compound lexeme verb-particle-xxx-up, with entries such as v-throw-up and v-look-up beneath it.]

The primitive semantic connections in an Ace hierarchy include the following:

dominate -- defines an isa link between two categories. This relation is labeled with a "D" in the figures. (dominate action running) means that running is an action, i.e., action dominates running.

manifest -- defines a constituent of a category. Unless a role-play applies (see below), this relation is labeled "m" in the figures. (manifest action actor) means that an action has an actor associated with it. This is analogous to a slot in other knowledge representations.

role-play -- establishes a relationship between a constituent (slot) of a dominating category and a constituent (slot) of a dominated category. In the figures, this relation is labeled with the appropriate role name for the constituent. (dominate action running (role-play actor runner)) means that in running, the role of actor (inherited from action) is played by the runner.

ref -- defines a mapping between an entity in the linguistic hierarchy and an entity in the conceptual hierarchy. This relation is labeled "ref" in the figures. (ref lex-run running) means that when the lexical category lex-run is invoked, the concept of running should be invoked as well. This is the main channel through which semantic interpretation is accomplished.

view -- defines a metaphorical mapping between two categories in the conceptual hierarchy. (view transfer-event action (role-play source actor)) means that in certain cases, an action can be metaphorically viewed as a transfer-event, with the actor viewed as the source of the transfer.

This brief introduction to Ace will help the reader understand the descriptions of the representation and access of compound lexemes that are presented in the next two subsections; a small sketch of these primitives as data structures follows.
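As an illustration only (the class and method names below are invented, not Ace's implementation), the primitives can be pictured as links in a small store:

    class AceNet:
        def __init__(self):
            # Each entry: ("dominate" | "manifest" | "role-play" | "ref", ...)
            self.links = []

        def dominate(self, parent, child, *role_plays):
            self.links.append(("dominate", parent, child))
            for general_role, specific_role in role_plays:
                self.links.append(("role-play", child,
                                   general_role, specific_role))

        def manifest(self, category, constituent):
            self.links.append(("manifest", category, constituent))

        def ref(self, lexical, concept):
            # The main channel of semantic interpretation.
            self.links.append(("ref", lexical, concept))

    net = AceNet()
    net.manifest("action", "actor")
    net.dominate("action", "running", ("actor", "runner"))
    net.ref("lex-run", "running")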
A. Compound Lexemes

1. Word Sequences

Word sequences are phrases such as "by and large" and "let alone" that must be treated as compound words because there is little hope in trying to determine their meaning by examining their components. Internally, these word sequences may or may not be grammatical (e.g., "kick the bucket" is internally grammatical, but "by and large" is not). Because this type of compound lexeme is very specific, a separate category exists for each word sequence under the general category of word-sequence. Lexical constraints are placed on the different constituents of the word-sequence relation by dominating them by the appropriate simple lexeme. This is one method that can be used to establish constraints on compound lexemes, and it is used throughout the compound lexeme hierarchy.

2. Lexical Relations

Lexical relations include compound lexical entities such as "pick up" and "sell out" that can appear in a variety of surface forms, but have some general relationship among their simple lexeme constituents. Compound lexemes such as verb-particles ("pick up"), verb-prepositions ("take to"), and helper-verbs ("get going") all fall into the category of lexical relations. In contrast to the individual subcategories of word sequences, there are many entries that fall underneath each individual subcategory of lexical relations. Most of the entries under these subcategories, however, share constituents with other entries, which makes generalizations possible. For example, Figure 1 shows how all verb-particles that have up as the particle (e.g., "pick up", "throw up", "look up") are represented.

This generalization in representing seemingly specific phrases is what makes FLUSH extensible. If a new verb-particle with up as the particle is added to the system (e.g., "hang up"), it inherits everything except the verb from the structure above it; that is, the general properties of verb-particle relations are inherited (such as the transposition of the particle with the object "it"), and the specific properties of verb-particles having the preposition "up" (the constraint on the preposition itself, and possibly some default semantics for the particle) are inherited.

3. Linguistic Relations

[Figure 2: The modifying-relation compound-lexeme hierarchy, with mod-rel below compound-lexeme and the verb-adjunct (va) and noun-post-modifier (npm) categories beneath it.]

Linguistic relations are invoked according to constraints on their constituents, where the constituents may be simple lexemes, compound lexemes, or syntactic structures. An example occurs in the sentence "John was sold a book by Mary", where the object of the preposition is the main actor of the event described by the verb. This condition occurs only when the whole verb is in the passive form (constraint 1) and the preposition in the modifying prepositional phrase is by (constraint 2).

Linguistic relations are difficult to represent for two reasons: their constituents are not always simple lexemes, and usually there are additional constraints on each constituent. It has been found, however, that a great deal of generality can be extracted from most of the linguistic relations to make accessing them easier. The best example of a linguistic relation is the class of the modifying prepositional phrases. In some instances, prepositional phrases modify noun phrases and verb phrases in almost the same way (e.g., "The man on the hill is a skier" and "We had a picnic on the hill"). In other cases prepositional phrases modify noun phrases and verb phrases in completely different ways (e.g., "The man by the car is my father." and "The boy was hit by the car."). FLUSH is able to represent both types of linguistic relation by having more than one level of generic representation.
Figure 2 shows the general modifying relation (mod-rel) at the first level below compound-lexeme. Prepositional phrases that are homogeneous across noun phrases and verb phrases are represented underneath this category. Below mod-rel in Figure 2 are the verb-adjunct (va) and noun-post-modifier (npm) categories, which make up the second level of generic representation. Prepositional phrases that modify verb phrases and noun phrases differently are represented underneath these categories.

As an example, in Figure 2 the mod-rel category has the more specific modifying relation mod-rel-xxx-from underneath it, which is a modifying relation where the preposition in the modifier is prep-from. Example uses of this prepositional phrase are found in the sentences: "The man arrived from New York" and "The woman from Boston is my aunt".

4. Linguistic/Conceptual Relations

These are expressions that cannot be easily handled as exclusively linguistic constructs, such as "giving permission", "getting permission", and "having permission". These expressions can be represented as an abstract possession concept where the possessed is "noun-permission", thus combining a class of concepts with a lexical category.

These compound lexemes have the unique characteristic of allowing linguistic relations to have explicit conceptual constraints. In the phrase "give a hug" there is an abstract relationship between the concept of giving and the simple lexeme noun-hug that implies the concept of hugging. Figure 3 shows the representation of this linguistic/conceptual relation. This kind of compound lexeme is invoked by the semantic interpreter, rather than by the parser, during a process called concretion: making concepts more concrete. The scope of this paper does not permit a discussion of concretion, but refer to [Jacobs, 1986b] for more information.

[Figure 3: The linguistic/conceptual relation lcr-give-hug, linking the concept of giving and the lexeme noun-hug to the concept of hugging.]

The descriptions in this section illustrate how FLUSH is able to represent a wide range of lexical phenomena in a hierarchical and uniform manner. The four classes of compound lexemes that are described encompass many of the usually problematic expressions in natural language, yet they are represented in a way that supports extension and adaptation. The next section describes how these representations are accessed by FLUSH.

B. Access

Although the compound lexeme representations illustrated in the previous section differ, FLUSH is able to employ a fairly flexible algorithm for accessing them. When the parser encounters a relation that may constitute a compound lexeme, it passes the name of the relation and the constituents that fill the appropriate roles to FLUSH. If FLUSH finds a compound lexeme that satisfies the constraints, it passes the lexeme back to the parser.

For example, if TRUMP is working on the sentence "John picked up the book", it encounters a possible verb-particle relationship between the verb "picked" and the preposition "up". When this relationship is apparent to the parser, FLUSH is called with the verb-part relation with the constituents of pt-verb-pick as the verb and prep-up as the particle:

(find-compound verb-part
    (v-verb-part pt-verb-pick)
    (p-verb-part prep-up))

In this example, the compound lexeme verb-part-pick-up is found by FLUSH and is returned to the parser.
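The discrimination search that produces this result can be sketched as follows, with a toy hierarchy holding just enough categories for the example; the names and representation are illustrative rather than FLUSH's actual internals.

    # Each category lists its children and the constraint it discharges.
    HIERARCHY = {
        "verb-part":          {"children": ["verb-part-xxx-up",
                                            "verb-part-xxx-out"],
                               "constraint": None},
        "verb-part-xxx-up":   {"children": ["verb-part-pick-up",
                                            "verb-part-throw-up"],
                               "constraint": ("p-verb-part", "prep-up")},
        "verb-part-xxx-out":  {"children": [],
                               "constraint": ("p-verb-part", "prep-out")},
        "verb-part-pick-up":  {"children": [],
                               "constraint": ("v-verb-part", "pt-verb-pick")},
        "verb-part-throw-up": {"children": [],
                               "constraint": ("v-verb-part", "pt-verb-throw")},
    }

    def find_compound(relation, constraints):
        """Descend from the given relation, discharging one constraint
        per matching subcategory; succeed when all are discharged."""
        if not constraints:
            return relation
        for child in HIERARCHY[relation]["children"]:
            c = HIERARCHY[child]["constraint"]
            if c in constraints:
                found = find_compound(child, constraints - {c})
                if found:
                    return found
        return None

    find_compound("verb-part", {("v-verb-part", "pt-verb-pick"),
                                ("p-verb-part", "prep-up")})
    # -> "verb-part-pick-up"

The same sketch returns None for "meditated up", since no subcategory discharges the verb constraint; that failure case is discussed next.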
If instead the sentence is "John meditated up the hill", the parser takes the same action, but no compound lexeme is found by FLUSH because "meditated up" has no special meaning.

FLUSH uses a two-step procedure to locate specific compound lexemes. First, entries below the given relation in the hierarchy are checked to see if any of them satisfy the given constraints. If a compound lexeme exists, it is usually found during this step. There are some cases, however, in which the desired compound lexeme exists as a subcategory of an ancestor of the given relation. This situation was seen in the description of the modifying relation (mod-rel), verb-adjunct (va), and noun-post-modifier (npm) in the previous section (see Figure 2). In this case, a second step in the search process looks at the sibling categories. This process continues until either the top of the compound-lexeme hierarchy is reached (which happens immediately for most relations) or until a suitable compound lexeme is found.

The process of finding a compound lexeme below the given relation is a matching problem. In response to the example call to find-compound above, the lexicon proceeds to look at the defined categories underneath verb-part, which include verb-part-xxx-up, verb-part-xxx-out, verb-part-xxx-off, etc., to see which one(s) satisfies the constraints. verb-part-xxx-up is found as a possibility, resulting in the same function being called recursively with the remaining constraints to find an appropriate category below it:

(find-compound verb-part-xxx-up
    (v-verb-part pt-verb-pick))

This process is repeated until one of two conditions occurs: either the given constraints are exhausted, in which case a category that satisfies all of them has been found; or there are no more categories to search but there are still constraints left, in which case no match has been found and it may be appropriate to search the ancestors' subcategories. In this example, the verb-part-pick-up category is found and returned on the second recursion; therefore, there is no need to search the hierarchy at a higher level.

If instead the parser is working on the sentence "The man arrived from New York", it encounters a possible verb-adjunct (va) relation between the verb "arrived" and the prepositional phrase "from New York". The lexicon is called with the va relation, but the first step in the search process (i.e., looking below the given relation) does not yield a compound lexeme because mod-rel-xxx-from is defined in terms of the mod-rel relation rather than in terms of the va relation (see Figure 2). So even though the relation that the parser encounters in the pattern is a verb-adjunct relation, the lexicon is flexible enough that it can apply more general knowledge to the retrieval problem.
The meanings of compound lexemes are represented and accessed using a reference pointer that links the linguistic category to a conceptual structure. Some of the conceptual reference pointers for compound lexemes are more complicated than simple lexical access because often there are several components that need to be mapped, but they are still defined in terms of the ref association [Jacobs, 1986a]. The example form below defines a reference from the compound lexeme mod-rel-xxx-from to the transfer-event concept:

(ref transfer-event <-> mod-rel-xxx-from
    (source <-> m-mod-rel-xxx-from))

This reference establishes that the modifying relation mod-rel-xxx-from should invoke the transfer-event concept, and the modifier part of mod-rel-xxx-from, namely m-mod-rel-xxx-from, should fill the role of source in this transfer-event. In the sentence "The man arrived from New York", the prepositional phrase "from New York" invokes mod-rel-xxx-from. In turn, the transfer-event concept is invoked with "New York" as the source of the transfer.

The explanations above illustrate that FLUSH is capable of representing and accessing most of the different types of lexical knowledge that natural language processing systems need to have. They also show how FLUSH can do most of it in a general manner, making extensions fairly straightforward. FLUSH is also equipped with a mechanism for automatic acquisition of new lexemes, described in [Besemer, 1986]. The discussion that follows concentrates on the application of the hierarchical lexicon to semantic interpretation in TRUMP.

III. Semantic Interpretation using FLUSH

Section II described the organization of the FLUSH lexicon, distinguishing several classes of lexical knowledge and showing the use of a hierarchical knowledge representation in representing examples of each class. One goal of this hierarchical organization is parsimony: because categories of compound lexemes inherit their constraints from more general categories, the number of linguistic constraints encoded explicitly can be reduced. A second function of the hierarchical representation, perhaps more important, is to facilitate the interpretation of the meaning of a compound lexeme.

Semantic interpretation is facilitated by each of the classes of compound lexemes discussed in section II. The simple example of word sequences allows the semantic interpreter to set aside the meanings of the individual words to interpret phrases such as "by and large" and "kick the bucket" correctly. Lexical relations, such as "pick up" and "working directory", permit the association of specialized meanings as well as the contribution of certain flexible lexical classes to the meaning of a phrase. For example, the phrase "branch manager" is interpreted using knowledge that it belongs to a lexical category common with "lab manager" and "program manager". Linguistic relations such as mod-rel-xxx-from permit general lexical knowledge to apply to the filling of conceptual roles. Linguistic/conceptual relations such as lcr-give-hug permit the specialized interpretation of expressions such as "give a hug" in a broad range of surface forms. The following examples illustrate the operation of the TRUMP semantic interpreter and its use of the FLUSH lexicon; a sketch of the role-filling mechanism appears below.
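The role-filling effect of a REF link can be sketched as a table lookup. The two associations below are the ones named in the paper's examples, while the helper itself is an assumption of this sketch.

    # REF associations: linguistic category -> (concept, role bindings).
    REFS = {
        "mod-rel-xxx-from": ("transfer-event",
                             {"m-mod-rel-xxx-from": "source"}),
        "mod-rel-xxx-to":   ("transfer-event",
                             {"m-mod-rel-xxx-to": "destination"}),
    }

    def interpret(relation, fillers):
        """Invoke the concept named by the REF link and map each
        linguistic constituent onto the conceptual role it is bound to."""
        concept, bindings = REFS[relation]
        roles = {bindings[part]: value
                 for part, value in fillers.items() if part in bindings}
        return concept, roles

    interpret("mod-rel-xxx-from", {"m-mod-rel-xxx-from": "New York"})
    # -> ("transfer-event", {"source": "New York"})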
Example 1: Send the laser printer characteristics to the branch manager.

Processing the above sentence stimulates a steady flow of information between TRUMP's parser and semantic interpreter and the FLUSH lexical access mechanism. The lexical analyzer recognizes "laser", "printer" and "characteristics" as nouns, but the search for compound lexical entries is activated only as the parser recognizes that the nouns form a compound. The specific entry for "laser printer" in the FLUSH lexicon, returned using the compound access method described in the previous section, provides two important pieces of information to TRUMP: First, it gives the semantic interpreter the correct meaning of the phrase, permitting TRUMP to forbear consideration of interpretations such as "a printer that prints lasers". Second, it enables the parser to favor the grouping [[laser printer] characteristics] over [laser [printer characteristics]] and thus come up with a viable meaning for the entire phrase.

The handling of the relationship between "characteristics" and "laser printer" makes use of the middle-level category cn-xxx-characteristic, much like the verb-particle-xxx-up category described in section II. The cn-xxx-characteristic category, representing compound nominals whose second noun is "characteristic", is associated with its meaning via a REF link in the following way:

(ref characteristic <-> cn-xxx-characteristic
    (manifester <-> ln-cn-xxx-characteristic))

The above association, in which ln-cn-xxx-characteristic denotes the first noun of a particular nominal compound, suggests the interpretation "characteristics of the laser printer". The treatment of this association as a middle-level node in the hierarchical lexicon, rather than as an independent lexical entry, has two features: First, it is often overridden by a more specific entry, as in "performance characteristics". Second, it may cooperate with more specific lexical or conceptual information. For example, the conceptual role manifester is a general one that, when applied to a more specific category, can lead to a specific interpretation without requiring a separate conceptual entry. This would happen with "laser printer performance characteristics".

The phrase "branch manager", like "laser printer characteristics", is interpreted using an intermediate entry cn-xxx-manager. While FLUSH has the capability, like PHRAN [Wilensky and Arens, 1980b], to constrain this category with the semantic constraint that the first noun must describe a bureaucratic unit, it is at present left to the semantic interpreter to determine whether the preceding noun can play such an organizational role.

Example 2: Cancel the transmission to the printer.

In this example, the lexical access mechanism must determine that "to the printer" invokes the mod-rel-xxx-to linguistic relation, which can be attached either to the verb "cancel" or the nominal "transmission". The semantic interpreter then finds the following association:

(ref transfer-event <-> mod-rel-xxx-to
    (destination <-> m-mod-rel-xxx-to))

The REF association above indicates that the object of the preposition "to" is related to the destination role of some generalized transfer event. Since "cancel" describes no such event, but "transmission" does, TRUMP correctly interprets "printer" as being the destination of the transmission. This allows the semantic interpreter to handle this example much in the same way as it would handle "Transmit the job to the printer", because the mod-rel relation class includes both postnominal modifiers and adverbial prepositional phrases.
As in the previous example, the semantic interpreter can make use of the interaction between this general interpretation rule and more specific knowledge; for example, "the sale of the book to Mary" invokes the same mod-rel-xxx-to relation, but the role of Mary is determined to be customer because that role is the conceptual specialization of the destination of a transfer. The process of correctly determining a conceptual role using linguistic relations is described in [Jacobs, 1987].

Example 3: How many arguments does the command take?

There are two major differences between this example and the previous two: First, the lexicon is driven by information passed from TRUMP's semantic interpreter, not only from the parser. In the previous example, the parser recognizes a potential relationship between a verb or nominal and a prepositional phrase. In this case, the semantic interpreter must determine if the conceptual relationship between the concept of taking and the term "arguments" invokes any special lexical knowledge. Second, the interpretation of "take arguments" is not a specialization of an abstract concept such as transfer-event, but rather is a result of a metaphorical view mapping from this concept to the concept of command-execution.

The interpretation of this sentence thus proceeds as follows: At the completion of the syntactic parse, the semantic interpreter produces an instantiation of the concept taking with the object arguments. The lexical access system of FLUSH, using the same discrimination process that determines a specialized linguistic relation, identifies lcr-transfer-arguments as a linguistic/conceptual relation invoked by the concept of a transfer with the lexical term "argument" attached to the conceptual object role. The same linguistic/conceptual relation is invoked by "giving arguments" or "getting arguments". The semantic interpreter continues by determining the metaphorical mapping between the transfer-event concept and the command-execution concept, a mapping that derives from the same conceptual relationships as other similar metaphors such as "The recipe takes three cups of sugar." In this way the amount of specialized information used for "take arguments" is kept to a minimum; effectively, FLUSH in this case is merely recognizing a linguistic/conceptual trigger for a general metaphor. (A sketch of this trigger-plus-view lookup appears below.)

This section has described the application of the FLUSH lexicon to the process of semantic interpretation in the TRUMP system. The examples illustrate some characteristics of the flexible lexicon design that differ from other phrasal systems: (1) There are a broad range of categories to which specialized information may be associated. The treatment of "branch manager" and "transmission to" illustrates the use of compound lexical knowledge at a more abstract level than other programs such as PHRAN. (2) The hierarchical lexicon reduces the number of phrasal entries that would be required in a more rigid system. Expressions such as "take arguments" and "get arguments" share a common entry. (3) The quantity of information in each phrasal entry is minimized. Linguistic constraints are often inherited from general categories, and the amount of semantic information required for a specialized entry is controlled by the method of determining an appropriate conceptual role. The "take arguments" expression thus does not require explicit representation of the relationships between linguistic and conceptual roles.
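For Example 3, the trigger-plus-view lookup might be sketched as below. The lcr-transfer-arguments name comes from the paper, but the role mappings shown are invented for illustration.

    # A linguistic/conceptual trigger plus a metaphorical view, sketched
    # as two lookup tables.
    LCR_TRIGGERS = {
        ("transfer-event", "argument"): "lcr-transfer-arguments",
    }
    VIEWS = {
        "lcr-transfer-arguments": ("command-execution",
                                   {"object": "parameter",
                                    "recipient": "command"}),
    }

    def concrete_interpretation(concept, object_lexeme):
        """If a concept/lexeme pair triggers an lcr entry, apply the
        view mapping to re-cast the abstract concept and its roles."""
        lcr = LCR_TRIGGERS.get((concept, object_lexeme))
        if lcr is None:
            return concept, {}
        return VIEWS[lcr]

    concrete_interpretation("transfer-event", "argument")
    # -> ("command-execution", {"object": "parameter", ...})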
IV. Conclusion

FLUSH is a flexible lexicon designed to represent linguistic constructs for natural language processing in an extensible manner. The hierarchical organization of FLUSH, along with the provision for a number of types of phrasal constructs, makes it easy to use knowledge at various levels in the lexical hierarchy. This design has the advantage of handling specialized linguistic constructs without being too rigid to deal with the range of forms in which these constructs may appear, and facilitates the addition of new constructs to the lexicon. FLUSH permits the correct semantic interpretation of a broad range of expressions without excessive knowledge at the level of specific phrases.

References

[Becker, 1975] J. Becker. The phrasal lexicon. In Theoretical Issues in Natural Language Processing, Cambridge, Massachusetts, 1975.

[Besemer, 1986] D. Besemer. FLUSH: Beyond the Phrasal Lexicon. Technical Report 086CRD181, General Electric Corporate Research and Development, 1986.

[Bobrow and Winograd, 1977] D. Bobrow and T. Winograd. An overview of KRL, a knowledge representation language. Cognitive Science, 1(1), 1977.

[Brachman and Schmolze, 1985] R. Brachman and J. Schmolze. An overview of the KL-ONE knowledge representation system. Cognitive Science, 9(2), 1985.

[Dyer and Zernik, 1986] M. Dyer and U. Zernik. Encoding and acquiring meanings for figurative phrases. In Proceedings of the 24th Annual Meeting of the Association for Computational Linguistics, New York, 1986.

[Halliday, 1978] M. A. K. Halliday. Language as Social Semiotic. University Park Press, Baltimore, Maryland, 1978.

[Jacobs, 1985a] P. Jacobs. A Knowledge-Based Approach to Language Production. PhD thesis, University of California, Berkeley, 1985. Computer Science Division Report UCB/CSD86/254.

[Jacobs, 1985b] P. Jacobs. PHRED: a generator for natural language interfaces. Computational Linguistics, 11(4), 1985.

[Jacobs, 1986a] P. Jacobs. Knowledge structures for natural language generation. In Proceedings of the Eleventh International Conference on Computational Linguistics, Bonn, Germany, 1986.

[Jacobs, 1986b] P. Jacobs. Language analysis in not-so-limited domains. In Proceedings of the Fall Joint Computer Conference, Dallas, Texas, 1986.

[Jacobs, 1987] P. Jacobs. A knowledge framework for natural language analysis. In Proceedings of the Tenth International Joint Conference on Artificial Intelligence, Milan, Italy, 1987.

[Jacobs and Rau, 1984] P. Jacobs and L. Rau. Ace: associating language with meaning. In Proceedings of the Sixth European Conference on Artificial Intelligence, Pisa, Italy, 1984.

[Lockwood, 1972] D. Lockwood. Introduction to Stratificational Linguistics. Harcourt, Brace, and Jovanovich, New York, 1972.

[Sondheimer et al., 1984] N. Sondheimer, R. Weischedel, and R. Bobrow. Semantic interpretation using KL-ONE. In Proceedings of the Tenth International Conference on Computational Linguistics, Palo Alto, 1984.

[Steinacker and Buchberger, 1983] I. Steinacker and E. Buchberger. Relating syntax and semantics: the syntactico-semantic lexicon of the system VIE-LANG. In Proceedings of the First European Meeting of the ACL, Pisa, Italy, 1983.

[Wilensky, 1986] R. Wilensky. Knowledge representation: a critique and a proposal. In J. Kolodner and C. Riesbeck, editors, Experience, Memory, and Reasoning, Lawrence Erlbaum Associates, Hillsdale, New Jersey, 1986.

[Wilensky and Arens, 1980a] R. Wilensky and Y. Arens.
PHRAN: A Knowledge-based Approach to Natural Language Analysis. Electronics Research Laboratory Memorandum UCB/ERL M80/34, University of California, Berkeley, 1980.

[Wilensky and Arens, 1980b] R. Wilensky and Y. Arens. PHRAN: a knowledge-based natural language understander. In Proceedings of the 18th Annual Meeting of the Association for Computational Linguistics, Philadelphia, 1980.
The Derivation of a Grammatically Indexed Lexicon from the Longman Dictionary of Contemporary English

Bran Boguraev†, Ted Briscoe§, John Carroll†, David Carter† and Claire Grover§
† Computer Laboratory, University of Cambridge, Corn Exchange Street, Cambridge CB2 3QG, England
§ Department of Linguistics, University of Lancaster, Bailrigg, Lancaster LA1 4YT, England

Abstract

We describe a methodology and associated software system for the construction of a large lexicon from an existing machine-readable (published) dictionary. The lexicon serves as a component of an English morphological and syntactic analyser and contains entries with grammatical definitions compatible with the word and sentence grammar employed by the analyser. We describe a software system with two integrated components. One of these is capable of extracting syntactically rich, theory-neutral lexical templates from a suitable machine-readable source. The second supports interactive and semi-automatic generation and testing of target lexical entries in order to derive a sizeable, accurate and consistent lexicon from the source dictionary, which contains partial (and occasionally inaccurate) information. Finally, we evaluate the utility of the Longman Dictionary of Contemporary English as a suitable source dictionary for the target lexicon.

1 Introduction

Within the larger framework of the Alvey Programme of advanced information technology (a research and development initiative set up in the UK to promote collaborative research projects aimed at several enabling key technologies) a coordinated effort to build a natural language toolkit for use by the wider academic and industrial community is being carried out jointly by groups at the Universities of Cambridge, Lancaster and Edinburgh.

The goal of these three closely related projects is to produce directly compatible rule systems and associated software, capable of functioning together as an integrated system for morphological and syntactic parsing of texts. The projects aim to deliver, respectively, a sentence grammar of English together with a word list indexed to the grammar, a combined inflectional and derivational morphological analyser and dictionary system, and a parser for the grammatical formalism used. The work is being carried out within the theoretical framework of Generalized Phrase Structure Grammar (Gazdar et al., 1985), but many of the mechanisms would be usable without a theoretical commitment to GPSG. It is envisaged that the complete integrated toolkit will be used by a number of research and development groups, as a base component for a range of applications. The potential requirements of a diverse user community motivate, in particular, the need for a morphological and syntactic analyser with wide coverage of English grammar and vocabulary. Briscoe et al. (1987) describes the sentence grammar formalism and current coverage of the English grammar in detail. Russell et al. (1986) describes the morphological analyser and dictionary system. Further relevant details of both projects are provided in section 2.

As part of the grammar project, in tandem with the development of the grammar proper, work is underway to develop a sizeable word list which will be integrated with an existing lexicon of about 4000 words, hand crafted by the morphology project. The coverage of this word list and its compatibility with the sentence grammar, word grammar and existing lexicon is critical for the complete analysis system.
The word list need only contain base and irregular entries, as productive inflectional and derivational variants are analysed at run-time on the basis of the word grammar. Therefore, when the word list is integrated with the existing lexicon and dictionary system it will form a dynamic system for word analysis, and not just a repository of word forms used for simple lookup.

An additional constraint on the content of the target word list comes from the fact that even though there is no provision for the analysis system to handle semantics, there is still the need to provide a minimal, theoretically neutral extension to the grammar rules and lexical entry format to allow subsequent integration of a semantic component: thus information concerning, e.g., the predicate-argument structure of verbs and their logical types must be made available in the lexical entries.

The question then arises of how to develop such a detailed and substantial word list. Our approach has been to make use of the machine-readable source of a published dictionary, namely the Longman Dictionary of Contemporary English (henceforth LDOCE) (Procter, 1978). Apart from the obvious motivation of attempting to derive a large list of words from a computerised source, LDOCE is particularly relevant to this project since it offers, among other things, through a highly elaborate and semi-formal system of grammar codes, detailed information about the grammatical behaviour of individual words. We have mounted the dictionary on-line and, following its conversion into a flexible lexical knowledge base (as described in Boguraev et al., 1987), a range of experiments have since been carried out with the aim of establishing LDOCE's appropriateness to the task of deriving a word list with associated grammatical definitions indexed to the analyser grammar. Section 3 below describes the syntactic level information available in, and extractable from, LDOCE and summarises the description of an operational program used to derive such information. The attempt to use semi-formalised, and occasionally inaccurate, information for constructing a large computerised lexicon raises a number of practical problems. In order to make maximal use of the rich syntactic data in the source machine-readable dictionary (MRD), we have designed a lexicon development system which embodies a methodology for a semi-automatic interactive cycle of lexical entry generation and testing. This is described in section 4.

2 The target lexicon

Given the goal of the toolkit projects to provide a lexicon capable of supporting morphological and syntactic analysis of English, there is a precise definition of the information required in lexical entries. Both the grammar and morphology projects have adopted a feature system based largely on that described in Gazdar et al. (1985). A lexical entry will contain features relevant either to the word grammar or sentence grammar, or both, represented as a list of feature name / feature value pairs. In Figure 1 we show a fragment from the hand crafted lexicon developed as part of the morphology project (Russell et al., 1986). Here we concentrate on the feature-value sets carrying the syntactic information; the complete entries also have semantic and user fields, which are of no relevance to this paper.
lJ *, l~'01Lq ![01)4], PID -, gF.A -, VOBD % AUX -, ISFL % FI! -, VFORM BSZ, IAT -, SUBCAT I'10NP] IV*. I -. B~ O. A~I. [BkR 2. V -. I ÷. NFO~ NoEq]. PRD-, ~-, woRD % AUX-, I]nq, % FIN -. VFOEq BSE. IAT -. SUBCAT IP..AP] [V ÷, N -, BA.i O. AGR [BE~. 2, V -. N ÷. NFOR/4 N0a.q]. FBD -. ~ -. V0RD +. AUX -. I~FL 4. FI~ -. YFOIH BSE. rat -. SO'CAT SFI]J] Figure 1: Sample lexical entries An almost complete list of the feature names and potential values which may occur as part of the lex- ical entry for a given morpheme is given in Figure 2 overleaf. Grover et al. (1987) contains a complete description of the features used in the sentence grzm- mar; P,.itchie et ~l. (1987) offers an equally complete description of the morphological and syntactic features relevant to the operations of the word grammar. For the purposes of this paper, we present a brief overview of the sentence grammar feature system. With exception of the features N, V and BAR, used to define the major categories of the grammar, most features can be classlfied in terms of the cate- gories they apply to. For each major category type there is a set of head features which must appear on all instances of that category type, regardless of their BAR feature value. Further features must (or may) be associated only with some instances of a category type, depending on the value of their BAR feature (or, on occasions, some other feature). The sets of head features for the four major categories axe: VERBALHEAD {PRD FIN AUX VFORAI PAST AGR} NOMINALHEAD {PLU POSS CASE PN COUNT PRD PRO PART NFORM PER} PREPHEAD {PFORM LOC PRD} ADJHEAD {AFORM PRD QUA ADV NUM NEG PAI~I ~ AGR DEF}. The features appearing on certain categories in ad- dition to the sets defined above are COMP, IN'V, NEG and SUBCAT which are relevant to verbal categories; SPEC, DEF and SUBCAT, applicable to nominal cat- egories; GERUND, POSS and SUBCAT for preposi- tional categories; and SUBCAT alone for adjectival categories. With exception of SUBCAT, which must be specified for all lexical entries, and the respective head features sets, the only other features required by the lexical nodes in the grammar are NEG, and DEF. Features like SLASH, WH, UB and EVER, which are required by the grammar to implement the GPSG treatment of certain linguistic phenomena, are of no relevance to this paper. The feature set in Figure 2 overleaf defines the in- formation about lexical items which will be required to construct a lexicon compatible both in form and content with the rest of the analysis system. Some of these features, (such as FIX) are specific to bound morphemes(these include, for example, entries for uztive", ~ng ~ or "nessJ). Other features (for instance WH, REFL) are specific to closed class vocabulary items, such as interrogative, relative and reflexive pro- nouns. Bound morphemes and closed class vocabulary are exhaustively defined in the hand crafted lexicon. However, this lexicon inevitably only contains a few examples of the much larger open class vocabulary° In order for the word and sentence grammars to func- tion correctly, open class vocabulary must be defined In terms of the feature set illustrated overleaf (Figure 2a). 
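Entries of the kind shown in Figure 1 map naturally onto simple data structures. The following is a minimal sketch in Python -- our own illustration, not part of the toolkit -- in which a base verb is paired with feature bundles that differ only in their SUBCAT value (feature names follow Figure 2 below):

```python
# A lexical entry pairs a base form with one or more feature bundles;
# a bundle is a flat mapping from feature names to values, except that
# AGR nests a full category (the subject the verb agrees with).
AGREEMENT = {"BAR": 2, "V": "-", "N": "+", "NFORM": "NORM"}

def verb_bundle(subcat):
    """Build one feature bundle for a base (VFORM BSE) verb."""
    return {"V": "+", "N": "-", "BAR": 0, "AGR": dict(AGREEMENT),
            "PRD": "-", "AUX": "-", "INFL": "+", "FIN": "-",
            "VFORM": "BSE", "SUBCAT": subcat}

# "believe" in the style of Figure 1: one bundle per subcategorisation.
believe = ("believe", [verb_bundle(s) for s in ("SFIN", "NP", "NP_AP", "OR")])
```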
The features relevant to the open class vocabulary can be divided into those which are predictable on the basis of the part of speech of the item involved, those which follow from the inflectional or derivational morphological rules incorporated into the system, and those which rely on more specific information than part of speech, but nevertheless must be specified for each individual entry. For example, the values for the features N, V and BAR in the sample entries above follow from the part of speech of "believe". The values of PLU and PER are predictable on the basis of the word grammar rules and need not be independently specified for each entry. On the other hand, the values of SUBCAT and LAT are not predictable from either part of speech or general morphological information.

We concentrate on this last class of features, which must be specified on an entry-by-entry basis in any lexicon which is going to be adequate for supporting the analysis system. Within this class of features some (eg. LAT, AT or BARE_ADJ) are only relevant to the word grammar. It is clear that those features that are derivable from the part of speech information are recoverable from virtually any MRD. However, most (if not all) of the features in the third class above are not recoverable from the majority of MRDs. As indicated above, LDOCE appears to be an exception to this generalisation, because it employs a system of grammatical tagging of major syntactic classes, offering detailed information about subcategorisation, morphological irregularity and broad syntactico-semantic information.

a. open class vocabulary
BAR {-1 0 1 2}; V {- +}; N {- +}; PRD {- +}; QUA {- +}; ADV {- +}; FIN {- +}; PAST {- +}; PLU {- +}; AT {- +}; LAT {- +}; AGR a category; STEM a category; SUBCAT {... PRED INF NP AP NOPASS SFIN VPINF SINF OR IT_SUBJ PPFROM PPTO TWONP FOR_S LOC S_SUBJ NP_NP NP_AP OE SR1 DETH AND ...}; INFL {- +}; COUNT {- +}; PN {- +}; PER {1 2 3}; CASE {NOM ACC}; BARE_ADJ {- +}; AFORM {ER EST NONE}; NFORM {IT THERE NORM}; VFORM {BSE EN ING TO}; FIX {PRE SUF}; INV {- +}; AUX {- +}; NEG {- +}; DEF {- +}; SLASH a category

b. closed class vocabulary and affixes
COMPOUND {NOUN VERB ADJ NOT}; TITLE {- +}; POSS {- +}; PFORM {WITH OF FROM AT ABOUT TO ON IN FOR AGAINST BY}; REFL a category; WH {- +}; UB {Q R}; EVER {- +}; PRO {- +}; PRT {AS IN OFF ON UP}

Figure 2: Features and feature values

3 The source data

It turns out that even though the grammar coding system of LDOCE is not GPSG specific, it encodes much of the information which GPSG requires relating to the subcategorisation classes in the lexicon. The Longman lexicographers have developed a representational system which is capable of describing compactly a variety of data relevant to the task of building a lexicon with grammatical definitions; in particular, they are capable of denoting distinctions between count and mass nouns ('dog' vs. 'desire'), predicative, postpositive and attributive adjectives ('asleep' vs. 'elect' vs. 'jocular'), noun and adjective complementation ('fondness', 'fact') and, most importantly, verb complementation and valency.

3.1 The Longman grammar coding system

Grammar codes typically contain a capital letter, followed by a number and, occasionally, a small letter, for example [T5a] or [V3].
The capital letters encode information "about the way a word works in a sentence or about the position it can fill" (Procter, 1978: xxviii); the numbers "give information about the way the rest of a phrase or clause is made up in relation to the word described" (ibid.). For example, "T" denotes a transitive verb with one object, while "5" specifies that what follows the verb must be a that clause. (The small letters, eg. "a" in the case above, provide information related to the status of various complementisers, adverbs and prepositions in compound verb constructions: here it indicates that the complementiser is optional.) As another example, "V3" introduces a verb followed by one object and a verb form (V) which must be an infinitive with to (3). In addition, codes can be qualified with words or phrases which provide further information concerning the linguistic context in which the described item is likely, and able, to occur; for example [D1(to)] or [L(to be)1].

Sets of codes, separated by semicolons, are associated with individual word senses in the lexical entry for a particular item, as the entry for "feel", with extracts from its printed form shown in Figure 3, illustrates. These sets are elided and abbreviated in the code field associated with the word sense to save space in the dictionary. Partial codes sharing an initial letter can be separated by commas, for example [T1,5a]. Word qualifiers relating to a complete sequence of codes can occur at the end of a code field, delimited by a colon, for example [T1;I0: (DOWN)].

feel v 1 [T1,6] to get the knowledge of by touching with the fingers: ... 2 [Wv6;T1] to experience (the touch or movement of something): ... 3 [L7] to experience (a condition of the mind or body); be consciously: ... 4 [L1] to seem to oneself to be: ... 5 [T1,5;V3] to believe, esp. for the moment: ... 6 [L7] to give (a sensation): ... 7 [Wv6;I0] to (be able to) experience sensations: ... 8 [Wv6;T1] to suffer because of (a state or event): ... 9 [L9 (after, for)] to search with the fingers rather than with the eyes: ...

Figure 3: Fragment of an LDOCE entry

This apparently formal syntax for describing grammatical information in a compact form occasionally breaks down: different classes of error occur in the tagging of word senses. These include, for example, misplaced comma or colon delimiters and occasional migration of other lexical information (eg. usage labels) into the grammar code fields.

This type of error and inconsistency arises because grammar codes are constructed by hand and no automatic checking procedure is attempted (Michiels, 1982). They provide much of the motivation for our interactive approach to lexicon development, since any attempt at batch processing without extensive user intervention would inevitably result in an incomplete and inaccurate lexicon.

3.2 Making use of the grammar codes

The program which transforms the LDOCE grammar codes into lexical entries utilisable by the analyser first produces a relatively theory-neutral representation of the lexical entry for a particular word.
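A mechanical preliminary to any such transformation is unpacking the elided code fields described in section 3.1 into explicit code lists. The following is a sketch of that step under our own naming; it handles the common letter-number patterns, while qualified codes such as [X (to be) 1,7] would need fuller treatment:

```python
def expand_code_field(field):
    """Expand an elided LDOCE code field, eg. "T1,5a;V3" or "T1;I0: (DOWN)",
    into a list of explicit codes plus an optional field-wide qualifier."""
    qualifier = None
    if ":" in field:                           # colon delimits a field qualifier
        field, qualifier = [p.strip() for p in field.split(":", 1)]
    codes = []
    for group in field.split(";"):             # semicolons separate full codes
        initial = None
        for code in (c.strip() for c in group.split(",")):
            if code and code[0].isalpha():     # a letter starts a new code
                initial = code[0]
                codes.append(code)
            elif code and initial:             # comma-elided: reuse the initial
                codes.append(initial + code)
    return codes, qualifier

# expand_code_field("T1,5a;V3")  ==  (["T1", "T5a", "V3"], None)
```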
As an illustration of the process of transforming a dictionary entry into a lexical template we show below the mapping of the third verb sense of "believe" into a lexical entry incorporating information about the grammatical category, syntactic subcategorisation frames and semantic type of the verb -- for example, a label like (Type 2 ORaising) indicates that under the given sense the verb is a two-place predicate and that if it occurs with a syntactic direct object, this will function as the logical subject of the predicate complement.

believe ... v 1 [I0] to have a firm religious faith 2 [T1] to consider to be true or honest: to believe someone / to believe someone's reports 3 [T5a,b;V3;X (to be) 1, (to be) 7] to hold as an opinion; suppose: I believe he has come. | He has come, I believe. | "Has he come?" "I believe so." | I believe him to have done it. | I believe him (to be) honest.

(believe verb (Sense 3)
  ((Takes NP SBar) (Type 2))
  ((Takes NP NP Inf) (Type 2 ORaising))
  ((or ((Takes NP NP NP) (Type 2 ORaising))
       ((Takes NP NP AuxInf) (Type 2 ORaising))))
  ((or ((Takes NP NP AP) (Type 2 ORaising))
       ((Takes NP NP AuxInf) (Type 2 ORaising)))))

Figure 4: A lexical template derived from LDOCE

This resulting structure is a lexical template, designed as a formal representation for the kind of syntactico-semantic information which can be extracted from the dictionary and which is relevant to a system for automatic morphological and syntactic analysis of English texts.

The overall transformation strategy employed by our system attempts to derive both subcategorisation frames relevant to a particular word sense and information about the semantic nature (i.e. the predicate-argument structure and the logical type) of, especially, verbs. In the main, the code numbers determine a unique subcategorisation. However, such semantic information is not explicitly encoded in the LDOCE grammar codes, so we have adopted an approach attempting to deduce a semantic classification of the particular sense of the verb under consideration on the basis of the complete set of codes assigned to that sense.

In any subcategorisation frame which involves a predicate complement there will be a non-transparent relationship between the superficial syntactic form and the underlying logical relations in the sentence. In these situations the parser can use the semantic type of the verb to compute this relationship. Expanding on a suggestion of Michiels (1982), we classify verbs as subject equi (SEqui), object equi (OEqui), subject raising (SRaising) or object raising (ORaising) for each sense which has a predicate complement code associated with it. These terms, which derive from Transformational Grammar, are used as convenient labels for what we regard as a semantic distinction.

The five rules which are applied to the grammar codes associated with a verb sense are ordered in a way which reflects the filtering of the verb sense through a series of syntactic tests. Verb senses with an [it+I5] code are classified as SRaising. Next, verb senses which contain a [V] or [X] code and one of the [D5], [D5a], [D6] or [D6a] codes are classified as OEqui. Then, verb senses which contain a [V] or [X] code and a [T5] or [T5a] code in the associated grammar code field (but none of the D codes mentioned above) are classified as ORaising. Verb senses with a [V] or [X(to be)] code (but no [T5] or [T5a] codes) are classified as OEqui. Finally, verb senses containing a [T2], [T3] or [T4] code, or an [I2], [I3] or [I4] code, are classified as SEqui.
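These five ordered rules admit a direct implementation. A minimal sketch, assuming each sense arrives as a list of explicit code strings (the function and helper names are ours):

```python
def classify_verb_sense(codes):
    """Apply the five ordered rules to one sense's grammar codes; the
    ordering itself encodes the filtering, so the first match wins."""
    def has(*prefixes):
        return any(c.startswith(p) for p in prefixes for c in codes)

    if has("it+I5"):
        return "SRaising"
    if has("V", "X") and has("D5", "D6"):      # D5, D5a, D6, D6a
        return "OEqui"
    if has("V", "X") and has("T5"):            # T5, T5a; D codes excluded above
        return "ORaising"
    if has("V", "X(to be)"):
        return "OEqui"
    if has("T2", "T3", "T4", "I2", "I3", "I4"):
        return "SEqui"
    return None                                # no predicate complement code

# classify_verb_sense(["T5a", "V3", "X (to be) 1"]) == "ORaising"  (cf. "believe")
```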
Below we give examples of each type; for a detailed description see Boguraev and Briscoe (1987).

happen(3)  [Wv6;it+I5]                          (Type 1 SRaising)
warn(1)    [Wv4;I0;T1:(of, against),5a;D5a;V3]  (Type 3 OEqui)
assume(1)  [Wv4;T1,5a,b;X(to be)1,7]            (Type 2 ORaising)
decline(3) [T1,3;I0]                            (Type 2 SEqui)

Figure 5: The four semantic types of verb

A generic lexical template of the form illustrated in Figure 4 can clearly be directly mapped into a feature cluster within the features and feature set declarations used by the dictionary and grammar projects. A comparison of the existing entries for "believe" in the hand crafted lexicon (Figure 1) and the third word sense for "believe" extracted from LDOCE demonstrates that much of the information available from LDOCE is of direct utility -- for example, the SUBCAT values can be derived by an analysis of the Takes values and the ORaising logical type specification above. Indeed, we have demonstrated the feasibility (Alshawi et al., 1985) of driving a parsing system directly from the information available in LDOCE by constructing dictionary entries for the PATR-II system (Shieber, 1984). It is also clear, however, that it is unrealistic to expect that on the basis of only the information available in the machine-readable source we will be able to derive a fully fleshed out lexical entry, capable of fulfilling all the run-time requirements of the analysis system that the lexicon under construction here is intended for.

3.3 Utility of LDOCE for automatic lexicon generation

Firstly, the information recoverable from LDOCE which is of direct utility is not totally reliable. Errors of omission and assignment occur in the dictionary; for example, the entry for "consider" (Figure 6) lacks a code allowing it to function in frames with a sentential complement (eg. I consider that it is a great honour to be here). The entry for "expect", on the other hand, spuriously separates two very similar word senses (1 and 5), assigning them different grammar codes.

consider ... 2 [Wv6; X (to be) 1,7; V3] to regard as; think of in a stated way: I consider you a fool (= I regard you as a fool). | I consider it a great honour to be with you today. | The old man considered me (to be) too lazy to be a good worker. | The Shetland Islands are usually considered as part of Scotland ...

expect ... 1 [T3,5a,b] to think (that something will happen): I expect (that) he'll pass the examination. | He expects to fail the examination. | "Will she come soon?" "I expect so." ... 5 [V3] to believe, hope and think (that someone will do something): The officer expected the men to do their duty in the coming battle ...

acknowledge ... 1 [T1,4,5 (to)] to agree to the truth of; recognise the fact or existence (of): I acknowledged the truth of your statement. | They acknowledged (to us) that they were defeated. | They acknowledged having been defeated. 2 [T1 (as); X (to be) 1,7] to recognise, accept, or admit (as): He was acknowledged to be the best player. | They acknowledged themselves (to be) defeated ...

Figure 6: Errors of omission and assignment in LDOCE

Errors like these ultimately cause the transformation program to fail in the mapping of grammar codes to feature clusters. We have limited our use of LDOCE to verb entries because these appear to be coded most carefully. However, the techniques outlined here are equally applicable to other open class items.
Furthermore, since some of the information required is only recoverable on the basis of a comparison of codes within a word sense specified in the source dictionary, additional errors can be introduced. For example, we assign ORaising to verbs which contain subcategorisation frames for a sentential complement, a noun phrase object and an infinitive complement within the same sense. However, this rule breaks down in the case of an entry such as "acknowledge", where the two codes corresponding to different subcategorisation frames are split between two (spuriously separated) word senses (Figure 6), and consequently incorrectly assigns OEqui to this verb. The rule likewise breaks down for "consider", which is incorrectly assigned the logical type of an Equi verb.

We have tested the classification of verbs into semantic types using a verb list of 139 pre-classified items available in various published sources (eg. Stockwell et al., 1973). The overall error rate in the process of grammar code analysis and transformation was 14%; however, the rules discussed above classify verbs into SRaising, SEqui and OEqui very successfully. The main source of error comes from the misclassification of ORaising as OEqui verbs. This was confirmed by another test, involving applying the rules for determining the semantic types of verbs over the 7,965 verb entries in LDOCE. The resulting lists, assigning the 719 verb senses which have the potential for predicate complementation into appropriate semantic classes, confirm that errors in our procedure are mostly localised to the (mis)application of the ORaising rule. Arguably, these errors derive mostly from errors in the dictionary, rather than a defect of the rule; see Boguraev and Briscoe (1987) for further discussion.

Secondly, the analysis system requires information which is simply not encoded in the LDOCE entries; for example, the morphological features AT, LAT and BARE_ADJ are not there. This type of feature is critical to the analysis of derivational variants, and such information is necessary for the correct application of the word grammar. Otherwise many morphologically productive, but nonexistent, lexical forms will be defined and be potentially analysable by the lexicon system. Therefore, lexical templates are not converted directly to target lexical entries, but form the input to a second phase in which errors and inadequacies in the source data are corrected.

4 A methodology and a system for lexicon development

In order to provide for fast, simple, but accurate development of a lexicon for the analysis system we have implemented a software environment which is integrated with the transformation program described above and which offers an integrated morphological generation package and editing facilities for the semi-automatic production of the target lexicon. The system is designed on the assumption that no machine-readable dictionary can provide a complete, consistent, and totally accurate source of lexical information. Therefore, rather than batch process the MRD source, the lexicon development software is based around the concept of semi-automatic and rapid construction of entries, involving the continuous intervention of the end user, typically a linguist / lexicographer.

In the course of an interactive cycle of development, a number of entries are hypothesised and automatically generated from a single base form.
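In outline, this hypothesis step works by affixation. The following is a minimal sketch; the affix inventory here is illustrative only (the real system derives legitimate affixations, with their spelling rules, from the word grammar):

```python
PREFIXES = ("co", "over", "sub", "dis", "post", "un",
            "inter", "pre", "under", "mis", "re", "out")
SUFFIXES = ("d", "r", "able", "ing", "al", "s", "e")     # for e-final stems

def candidate_variants(base):
    """Propose surface forms by affixation, to be displayed in batches
    inside instantiated syntactic frames for acceptance or rejection."""
    for prefix in PREFIXES:
        yield prefix + base
    for suffix in SUFFIXES:
        stem = base[:-1] if base.endswith("e") and suffix[0] in "ai" else base
        yield stem + suffix

# list(candidate_variants("believe")) reproduces most of Figure 7 below:
# "disbelieve", "believed", "believer", "believable", "believal", ...
```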
The family of related surface forms is output by the morphological generator, which employs the same word grammar used for inflectional and derivational morphology by the analysis system and creates new entries by adding affixes to the base form in legitimate ways. The generation and refinement of new entries is based on repeated application of the morphological generator to suitable base forms, followed by user intervention involving either rejecting, or minimally editing, the surface forms proposed by the system. Below we sketch a typical pattern of use.

If the user asks the system to create an entry for "believe", the transformation program described in section 3.2 (see Figure 4) will create an entry which contains all the syntactic information specified in Figure 1. In addition, many surface forms with associated grammatical definitions will be generated automatically:

cobelieve     overbelieve   subbelieve     believed
disbelieve    postbelieve   unbelieve      believee
interbelieve  prebelieve    underbelieve   believer
misbelieve    rebelieve     believable     believing
outbelieve    believal      believes

Figure 7: Derivational variants of "believe"

The system generates these forms from the base entry in batches and displays the results in syntactic frames associated with subcategorisation possibilities. These frames, which are used to tap the user's grammaticality judgements, are as semantically 'bleached' as possible, so that they will be as compatible as possible with the semantic restrictions that verbs place on their arguments. Each possible SUBCAT feature value in the grammar is associated with such frames, for example:

SFIN: They [ ... ] that someone is something
OR:   They [ ... ] someone to be something
      They [ ... ] there to be a problem
OE:   They [ ... ] someone to be something
      *They [ ... ] there to be a problem

Figure 8: Syntactic subcategorisation frames

Internally, frames are more complex than illustrated above. Surface phrasal forms with marked slots in them are associated with more detailed feature specifications of lexical categories which are compatible with the fully instantiated lexical items allowed by the grammar to fill the slots. Such detailed frame specifications are automatically generated on the basis of syntactic analysis of sentences made up from the frame phrase skeleton with valid lexical items substituted for the blank slot filler. Figure 9 below shows a fragment of the system's inventory of frames.

They [ ... ] that someone is something.
  [N -, V +, BAR 0, AGR [N +, V -, BAR 2, NFORM NORM, PER 3, PLU +, COUNT +, CASE NOM], SUBCAT SFIN]
They [ ... ] someone to be something.
  [N -, V +, BAR 0, AGR [...], SUBCAT OE]
  [N -, V +, BAR 0, AGR [...], SUBCAT OR]
They [ ... ] there to be a problem.
  [N -, V +, BAR 0, AGR [...], SUBCAT OR]
* They [ ... ] there to be a problem.
  [N -, V +, BAR 0, AGR [...], SUBCAT OE]

Figure 9: Complete syntactic frames (agreement bundles after the first are abbreviated here)

The system ensures that slots in syntactic frames are filled by surface forms which have the syntactic features the sentence grammar requires. Displaying such instantiated frames provides a double check both on the outright correctness of the surface form and on the correctness of a surface form paired with a particular definition. For example, the user can reject They overbelieve that someone is something completely, but They believes that someone is something is indicative of an incorrect definition, rather than surface form. Syntactic frames encoding other 'transformational' possibilities are often associated with particular SUBCAT values, since these provide the user with more helpful data to accept or reject a particular assignment. Thus, for example, selecting between Raising and OEqui verbs is made easier if the frames for [SUBCAT OR] are instantiated simultaneously:

They believe someone to be something / They persuade someone to be something
They believe there to be a problem / *They persuade there to be a problem

Figure 10: SUBCAT value selection

The user has two broad options: to reject a set of frames and associated surface form outright, or to edit either the surface form or definition associated with a set of frames. Exercising the first option causes all instances of the surface form and associated syntactic frames to be removed from the screen and from further consideration by the user. However, this action has no effect on the eventual output of the system, so these morphologically productive but non-existent forms and definitions will still be implicit in the lexicon and morphology component of the English analyser. It is assumed that this overgeneration is harmless though, because such forms will not occur in actual input.

Editing a surface form or associated definition results in a new (non-productive) entry which will form part of the system's output, to be included as an independent irregular entry in the target lexicon. If the user edits a surface form, the edited version is substituted in all the relevant syntactic frames. Provided the user is satisfied with the modified frames, a new entry is created with the new surface form, treated as an indivisible morpheme, and paired with the existing definition. Similarly, if the user edits a definition associated with a set of syntactic frames, a new set of frames will be constructed and, if he or she is happy with these, a new entry will be created with the existing surface form and modified definition. (The English analyser can be run in a mode where non-productive separate entries are 'preferred' to productive ones.)

The user can modify both the surface form and the associated definition during one interaction with a particular potential entry; for example, the definition for "believal" contains both an incorrect surface form and definition for a nominal form of the base form "believe". After the associated syntactic frames are displayed to the user, instead of rejecting the entire entry at this point, he or she can modify the surface form to create a new entry for "belief" -- a process which results in the revised syntactic frames:

Figure 11: Frame-based refinement of "belief"

The user now has three options. Rejecting the third syntactic frame, or alternatively deleting the associated sub-entry with a [SUBCAT OR] feature definition, followed by confirmation, will result in the construction of a new entry for the lexicon. The third option, should the user decide that nominal forms never take OR complements, is to edit the morphological rules themselves. This option is more radical and would presumably only be exercised when the user was certain about the linguistic data.

The system described so far allows the semi-automatic, computer-aided production of base entries and irregular, non-productive derived entries on the basis of selection and editing of candidate surface forms and definitions thrown up by the derivational generator. However, this approach is only as good as the initial base entry constructed from LDOCE. If the base entry is inadequate, the predictions produced by the generator are likely to be inadequate too. This will result in too much editing for the system to be much help in the rapid production of a sizeable lexicon. Fortunately, the system of syntactic frames and editing facilities outlined above can also be used to refine base entries and make up for inadequacies in the LDOCE grammar code system (from the perspective of the target grammar). For example, LDOCE encodes transitivity adequately but does not represent systematically whether a particular transitive has a passive form. In the target grammar, there are two SUBCAT values, NP and NOPASS, which distinguish these types of verb. Therefore, all verbs with a transitive LDOCE code are inserted into the two sets of syntactic frames shown below. When these frames are instantiated with particular verbs, rejection of one or the other is enough to refine the LDOCE code to the appropriate SUBCAT value. For example, the instantiated frames for "cost" are:

NP:   They [ ... ] them              NOPASS: They [ ... ] them
      Those are [ ... ] by them              *Those are [ ... ] by them

      They cost them                         They cost them
      Those are cost by them                 *Those are cost by them

Figure 12: The NP / NOPASS distinction

The fact that "cost" does not fit into the NP passive (second) frame, behaving in a way compatible with the NOPASS predictions, means it acquires a NOPASS SUBCAT value. Since these frames will be displayed first and the operation changes the base entry, subsequent forms and definitions generated by the system will be based on the new edited base entry.

This example also highlights one of the inherent problems in our approach to lexicon development. Syntactic frames are used in preference to direct perusal of definitions in terms of feature lists, to speed up lexicon development by tapping the user's grammaticality judgements directly and to reduce the amount of editing and keyboard input. They also provide the user with a degree of insulation from the technical details of the morphological and syntactic formalism. However, semantically 'bleached' frames can lead to confusion when they interact with word sense ambiguity. For example, "weigh" has two senses, one of which allows passive and one of which does not (compare The baby was weighed by the doctor with *Ten pounds was weighed by the baby). Unfortunately, the syntactic frames given for NP / NOPASS are not 'bleached' enough, because they tend to select the sense of "weigh" which does allow passive. The example raises wider issues about the integration of some treatment of word meaning with the production of such a lexicon. These issues go beyond this paper, but the problem illustrated demonstrates that the type of techniques we have described are heuristic aids rather than failsafe procedures for the rapid construction of a sizeable and accurate lexicon from a machine-readable dictionary of variable accuracy and consistency.
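Mechanically, the refinement step just illustrated is a filter over SUBCAT hypotheses driven by the user's judgements. A sketch, with frame skeletons and names of our own:

```python
FRAMES = {
    "NP":     ["They [ _ ] them", "Those are [ _ ] by them"],
    "NOPASS": ["They [ _ ] them"],   # the passive frame is predicted ungrammatical
}

def refine_transitive(verb, accepts):
    """Return the SUBCAT values all of whose instantiated frames the user
    accepts; `accepts` is the user's judgement on one surface sentence."""
    return [subcat for subcat, frames in FRAMES.items()
            if all(accepts(f.replace("[ _ ]", verb)) for f in frames)]

# For "cost", rejecting "Those are cost by them" leaves just ["NOPASS"].
```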
5 Conclusion

Practical natural language applications require vocabularies substantially larger than those typically developed for theoretical or demonstration purposes, and hand crafting these is often not feasible, and certainly never desirable. The evaluation of the LDOCE grammar coding system suggests that it is sufficiently detailed and accurate (for verbs) to make the on-line production of the syntactic component of lexical entries both viable and labour saving. However, the less than 100% accuracy of the code assignments in the source dictionary suggests that a system using the machine-readable version for lexicon development must embody a methodology allowing rapid, interactive and semi-automatic generation and testing of lexical entries on a large scale.

We have outlined a lexicon development environment which embodies a practical approach to using an existing MRD for the construction of a substantial computerised lexicon. The system splits the derivation of target lexical entries into two phases: an automatic transformation of the source data into a formalised lexical template containing as much relevant information as can be derived (directly or indirectly), followed by semi-automatic correction and refinement of this template into a set of base and irregular target entries.

6 Acknowledgements

This work was supported by research grants (Numbers GR/D/4217.7 and GR/D/05554) from the UK Science and Engineering Research Council under the Alvey Programme. We are grateful to the Longman Group Limited for kindly allowing us access to the typesetting tape of the Longman Dictionary of Contemporary English for research purposes.

7 References

Alshawi, Hiyan; Boguraev, Bran and Briscoe, Ted (1985) 'Towards a dictionary support environment for a real-time parsing system', Proceedings of the 2nd European Conference of the Association for Computational Linguistics, Geneva, Switzerland, pp. 171-178

Boguraev, Bran; Carter, David and Briscoe, Ted (1987) 'A multi-purpose interface to an on-line dictionary', Third Conference of the European Chapter of the Association for Computational Linguistics, Copenhagen, Denmark

Boguraev, Bran and Briscoe, Ted (1987) 'Large lexicons for natural language processing -- exploring the grammar coding system of LDOCE', Computational Linguistics, vol. 13

Briscoe, Ted; Grover, Claire; Boguraev, Bran and Carroll, John (1987) 'A formalism and environment for the development of a large grammar of English', Tenth International Joint Conference on Artificial Intelligence, Milan, Italy

Gazdar, Gerald; Klein, Ewan; Pullum, Geoffrey K. and Sag, Ivan A. (1985) Generalized phrase structure grammar, Oxford: Blackwell and Cambridge: Harvard University Press

Grover, Claire; Briscoe, Ted; Carroll, John and Boguraev, Bran (1987, forthcoming) 'The Alvey natural language tools project grammar -- a large computational grammar of English', Lancaster Papers in Linguistics, Department of Linguistics, University of Lancaster

Michiels, Archibald (1982) Exploiting a large dictionary database, Ph.D. Thesis, Université de Liège, Belgium
Procter, Paul (1978) Longman dictionary of contemporary English, Longman Group Limited, Harlow and London, England

Ritchie, Graeme; Pulman, Stephen; Black, Alan and Russell, Graham (1987) 'A computational framework for lexical description', Computational Linguistics, vol. 13

Russell, Graham; Pulman, Steve; Ritchie, Graeme and Black, Alan (1986) 'A dictionary and morphological analyser for English', Proceedings of the 11th International Congress on Computational Linguistics, Bonn, Germany, pp. 277-279

Shieber, Stuart (1984) 'The design of a computer language for linguistic information', Proceedings of the 10th International Congress on Computational Linguistics, Stanford, California, pp. 362-366

Stockwell, Robert; Schachter, Paul and Partee, Barbara (1973) The major syntactic structures of English, Holt, Rinehart and Winston, New York, NY
Lexical Selection in the Process of Language Generation

James Pustejovsky
Department of Computer Science, Brandeis University, Waltham, MA 02254
617-736-2709, jamesp@brandeis.csnet-relay

Sergei Nirenburg
Computer Science Department, Carnegie-Mellon University, Pittsburgh, PA 15213
412-268-3823, sergei@cad.cs.cmu.edu

Abstract

In this paper we argue that lexical selection plays a more important role in the generation process than has commonly been assumed. To stress the importance of lexical-semantic input to generation, we explore the distinction and treatment of generating open and closed class lexical items, and suggest an additional classification of the latter into discourse-oriented and proposition-oriented items. Finally, we discuss how lexical selection is influenced by thematic (focus) information in the input.

1. Introduction

There is a consensus among computational linguists that a comprehensive analyzer for natural language must have the capability for robust lexical disambiguation, i.e., its central task is to select appropriate meanings of lexical items in the input and come up with a non-contradictory, unambiguous representation of both the propositional and the non-propositional meaning of the input text. The task of a natural language generator is, in some sense, the opposite: rendering an unambiguous meaning in a natural language. The main task here is to perform principled selection of a) lexical items and b) the syntactic structure for input constituents, based on lexical semantic, pragmatic and discourse clues available in the input. In this paper we will discuss the problem of lexical selection.

The problem of selecting lexical items in the process of natural language generation has not received as much attention as the problems associated with expressing explicit grammatical knowledge and control. In most of the generation systems, lexical selection could not be a primary concern due to the overwhelming complexity of the generation problem itself. Thus, MUMBLE concentrates on grammar-intensive control decisions (McDonald and Pustejovsky, 1985a) and some stylistic considerations (McDonald and Pustejovsky, 1985b); TEXT (McKeown, 1985) stresses the strategical level of control decisions about the overall textual shape of the generation output.(1) KAMP (Appelt, 1985) emphasizes the role that dynamic planning plays in controlling the process of generation, and specifically, of referring expressions; NIGEL (Mann and Matthiessen, 1983) derives its control structures from the choice systems of systemic grammar, concentrating on grammatical knowledge without fully realizing the 'delicate' choices between elements of what systemicists call lexis (e.g., Halliday, 1961). Thus, the survey in Cumming (1986) deals predominantly with the grammatical aspects of the lexicon.

We discuss here the problem of lexical selection and explore the types of control knowledge that are necessary for it. In particular, we propose different control strategies and epistemological foundations for the selection of members of a) open-class and b) closed-class lexical items. One of the most important aspects of control knowledge our generator employs for lexical selection is the non-propositional information (including knowledge about focus and discourse cohesion markers). Our generation system incorporates the discourse and textual knowledge provided by TEXT as well as the power of MUMBLE's grammatical constraints, and adds principled lexical selection (based on a large semantic knowledge base) and a control structure capitalizing on the inherent flexibility of distributed architectures.(2) The specific innovations discussed in this paper are:

(1) Derr and McKeown, 1984 and McKeown, 1985, however, discuss thematic information, i.e. focus, as a basis for the selection of anaphoric pronouns. This is a fruitful direction, and we attempt to extend it for treatment of additional discourse-based phenomena.

(2) Rubinoff (1986) is one attempt at integrating the textual component of TEXT with the grammar of MUMBLE. This interesting idea leads to a significant improvement in the performance of sentence production. Our approach differs from this effort in two important respects. First, in Rubinoff's system the output of TEXT serves as the input to MUMBLE, resulting in a cascaded process. We propose a distributed control where the separate knowledge sources contribute to the control when they can, opportunistically. Secondly, we view the generation process as the product of many more components than the number proposed in current generators. For a detailed discussion of these see Nirenburg and Pustejovsky, in preparation.

1. We attach importance to the question of what the input to a generator should be, both as regards its content and its form; thus, we maintain that discourse and pragmatic information is absolutely essential in order for the generator to be able to handle a large class of lexical phenomena; we distinguish two sources of knowledge for lexical selection, one discourse- and pragmatics-based, the other lexical semantic.

2. We argue that lexical selection is not just a side effect of grammatical decisions but rather acts to flexibly constrain concurrent and later generation decisions of either lexical or grammatical type. For comparison, MUMBLE's lexical selections are performed after some grammatical constraints have been used to determine the surface syntactic structure; this type of control of the generation process does not seem optimal or sufficient for all generation tasks, although it may be appropriate for on-line generation models; we argue that the decision process is greatly enhanced by making lexical choices early on in the process. Note that the above does not presuppose that the control structure for generation is to be like cascaded transducers; in fact, the actual system that we are building based on these principles features a distributed architecture that supports non-rigid decision making (it follows that the lexical and grammatical decisions are not explicitly ordered with respect to each other). This architecture is discussed in detail in Nirenburg and Pustejovsky, in preparation.

3. We introduce an important distinction between open-class and closed-class lexical items in the way they are represented as well as the way they are processed by our generator; our computational, processing-oriented paradigm has led us to develop a finer classification of the closed-class items than that traditionally acknowledged in the psycholinguistic literature; thus, we distinguish between discourse-oriented closed-class (DOCC) items and proposition-oriented ones (POCC);

4.
We upgrade the importance of knowledge about focus in the sentence to be generated so that it becomes one of the prime heuristics for controlling the entire generation process, including both lexical selection and grammatical phrasing.

5. We suggest a comprehensive design for the concept lexicon component used by the generator, which is perceived as a combination of a general-purpose semantic knowledge base describing a subject domain (a subworld) and a generation-specific lexicon (indexed by concepts in this knowledge base) that consists of a large set of discrimination nets with semantic and pragmatic tests on their nodes. These discrimination nets are distinct from the choosers in NIGEL's choice systems, where grammatical knowledge is not systematically separated from the lexical semantic knowledge (for a discussion of problems inherent in this approach see McDonald, Vaughan and Pustejovsky, 1986); the pragmatic nature of some of the tests, as well as the fine level of detail of knowledge representation, is what distinguishes our approach from previous conceptual generators, notably PHRED (Jacobs, 1985).

2. Input to Generation

As in McKeown (1985, 1986), the input to the process of generation includes information about the discourse within which the proposition is to be generated. In our system the following static knowledge sources constitute the input to generation:

1. A representation of the meaning of the text to be generated, chunked into proposition-size modules, each of which carries its own set of contextual values (cf. TRANSLATOR, Nirenburg et al., 1986, 1987);

2. the semantic knowledge base (concept lexicon) that contains information about the types of concepts (objects (mental, physical and perceptual) and processes (states and actions)) in the subject domain, represented with the help of the description module (DRL) of the TRANSLATOR knowledge representation language. The organizational basis for the semantic knowledge base is an empirically derived set of inheritance networks (isa, made-of, belongs-to, has-as-part, etc.);

3. The specific lexicon for generation, which takes the form of a set of discrimination nets, whose leaves are marked with lexical units or lexical gaps and whose non-leaf nodes contain discrimination criteria that for open-class items are derived from selectional restrictions, in the sense of Katz and Fodor (1963) or Chomsky (1965), as modified by the ideas of preference semantics (Wilks, 1975, 1978). Note that most closed-class items have a special status in this generation lexicon: the discrimination nets for them are indexed not by concepts in the concept lexicon, but rather by the types of values in certain (mostly, non-propositional) slots in input frames;

4. The history of processing, structured along the lines of the episodic memory organization suggested by Kolodner (1984) and including the feedback of the results of actual lexical choices during the generation of previous sentences in a text.

3. Lexical Classes

The distinction between the open- and closed-class lexical units has proved an important one in psychology and psycholinguistics. The manner in which retrieval of elements from these two classes operates is taken as evidence for a particular mental lexicon structure. A recent proposal (Morrow, 1986) goes even further, to explain some of our discourse processing capabilities in terms of the properties of some closed-class lexical items.
It is interesting that for this end Morrow assumes, quite uncritically, the standard division between closed- and open-class lexical categories: 'Open-class categories include content words, such as nouns, verbs and adjectives... Closed-class categories include function words, such as articles and prepositions...' (op. cit., p. 423). We do not elaborate on the definition of the open-class lexical items. We have, however, found it useful to actually define a particular subset of closed-class items as being discourse-oriented, distinct from those closed-class items whose processing does not depend on discourse knowledge. A more complete list of closed-class lexical items will include the following:

• determiners and demonstratives (a, the, this, that);
• quantifiers (most, every, each, all of);
• pronouns (he, her, its);
• deictic terms and indexicals (here, now, I, there);
• prepositions (on, during, against);
• parentheticals and attitudinals (as a matter of fact, on the contrary);
• conjunctions, including discontinuous ones (and, because, neither...nor);
• primary verbs (do, have, be);
• modal verbs (shall, might, ought to);
• wh-words (who, why, how);
• expletives (no, yes, maybe).

We have concluded that the above is not a homogeneous list; its members can be characterized on the basis of what knowledge sources are used to evaluate them in the generation process. We have established two such distinct knowledge sources: purely propositional information, and contextual and discourse knowledge. Those closed-class items that are assigned a denotation only in the context of an utterance will be termed discourse-oriented closed-class (DOCC) items; this includes determiners, pronouns, indexicals, and temporal prepositions. Those contributing to the propositional content of the utterance will be called proposition-oriented closed-class (POCC) items. These include modals, locative and function prepositions, and primary verbs.

According to this classification, the "definiteness effect" (that is, whether a definite or an indefinite noun phrase is selected for generation) is distinct from general quantification, which appears to be decided on the basis of propositional factors. Note that prepositions no longer form a natural class of simple closed-class items. For example, in (1) the preposition before unites two entities connected through a discourse marker. In (2) the choice of the preposition on is determined by information contained in the propositional content of the sentence.

(1) John ate breakfast before leaving for work.
(2) John sat on the bed.

We will now suggest a set of processing heuristics for the lexical selection of a member from each lexical class. This classification entails that the lexicon for generation will contain only open-class lexical items, because the rest of the lexical items do not have an independent epistemological status outside the context of an utterance. The selection of closed-class items, therefore, comes as a result of the use of the various control heuristics that guide the process of generation. In other words, they are incorporated in the procedural knowledge rather than the static knowledge.

4.0 Lexical Selection

4.1 Selection of Open-Class Items

A significant problem in lexical selection of open-class items is how well the concept to be generated matches the desired lexical output. In other words, the input to generate in English the concept 'son's wife's mother' will find no single lexical item covering the entire expression.
In Russian, however, this meaning is covered by a single word, 'swatja'. This illustrates the general problem of lexical gaps and bears on the question of how strongly the conceptual representation is influenced by the native tongue of the knowledge engineer. The representation must be comprehensive yet flexible enough to accommodate this kind of problem. The processor, on the other hand, must be constructed so that it can accommodate lexical gaps by being able to build the most appropriate phrase to insert in the slot for which no single lexical unit can be selected (perhaps along the lines of McDonald and Pustejovsky, 1985a).

To illustrate the knowledge that bears upon the choice of an open-class lexical item, let us trace the process of lexical selection of one of the words from the list: desk, table, dining table, coffee table, utility table. Suppose, during a run of our generator, we have already generated the following partial sentence:

(3) John bought a ......

and the pending input is as partially shown in Figures 1-3. Figure 1 contains the instance of a concept to be generated.

(stol#4
  (instance-of stol)
  (color black)
  (size small)
  (height average)
  (mass average)
  (made-of steel)
  (location-of eat))

Figure 1

(stol
  (isa furniture)
  (color black brown yellow white)
  (size small average)
  (height low average high)
  (mass less-than-average average)
  (made-of wood plastic steel)
  (location-of eat write sew work)
  (has-as-part (leg leg leg (leg) top))
  (topology (on (top leg))))

Figure 2

Figure 2 contains the representation of the corresponding type in the semantic knowledge base. Figure 3 contains an excerpt from the English generation lexicon, which is the discrimination net for the concept in Figure 2.

case location-of
  of eat: case height
            of low: coffee table
            of average: dining table
  of write: desk
  of sew: sewing table
  of saw: workbench
  otherwise: table

Figure 3

In order to select the appropriate lexicalization, the generator has to traverse the discrimination net, having first found the answers to the tests on its nodes in the representation of the concept token (in Figure 1). In addition, the latter representation is compared with the representation of the concept type, and if non-default values are found in some slots, then the result of the generation will be a noun phrase with the above noun as its head and a number of adjectival modifiers. Thus, in our example, the generator will produce 'black steel dining table' (a code sketch of this procedure appears at the end of this section).

4.2 Selection of POCC Items

Now let us discuss the process of generating a proposition-oriented lexical item. The example we will use here is that of the function preposition to. The observation here is that if to is a POCC item, the information required for generating it should be contained within the propositional content of the input representation; no contextual information should be necessary for the lexical decision. Assume that we wish to generate sentence (1), where we are focussing on the selection of to.

(1) John walked to the store.

If the input to the generator is

(walk
  (Actor John)
  (Location "here")
  (Source U)
  (Goal store23)
  (Time past2)
  (Intention U)
  (Direction store23))

then the only information necessary to generate the preposition is the case role for the goal, store. Notice that a change in the lexicalization of this attribute would only arise with a different input to the generator. Thus, if the goal were unspecified, we might generate (2) instead of (1); but here the propositional content is different.

(2) John walked towards the store.
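A minimal sketch of the open-class procedure traced above, using the structures of Figures 1-3. The default values assumed for the type's slots are our own guesses -- the paper does not list them -- chosen so that the example yields 'black steel dining table':

```python
STOL_DEFAULTS = {"color": "brown", "size": "small", "height": "average",
                 "mass": "average", "made-of": "wood"}   # assumed defaults

def head_noun(token):
    """Traverse the Figure 3 discrimination net with the token's values."""
    use = token.get("location-of")
    if use == "eat":
        return {"low": "coffee table"}.get(token["height"], "dining table")
    return {"write": "desk", "sew": "sewing table", "saw": "workbench"}.get(use, "table")

def lexicalize(token):
    """Head noun from the d-net plus one adjective per non-default slot."""
    modifiers = [value for slot, value in token.items()
                 if slot in STOL_DEFAULTS and value != STOL_DEFAULTS[slot]]
    return " ".join(modifiers + [head_noun(token)])

stol4 = {"instance-of": "stol", "color": "black", "size": "small",
         "height": "average", "mass": "average", "made-of": "steel",
         "location-of": "eat"}
# lexicalize(stol4) -> "black steel dining table"
```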
In the complete paper we will discuss the generation of two other POCC items, namely quantifiers and primary verbs, such as do and have.

4.3 Selection of DOCC Items: Generating a discourse anaphor

Suppose we wish to generate an anaphoric pronoun for an NP in a discourse where its antecedent was mentioned in a previous sentence. We illustrate this in (1)-(2) below. Unlike open-class items, pronominals are not going to be directly associated with concepts in the semantic knowledge base. Rather, they are generated as a result of decisions involving contextual knowledge, the beliefs of the speaker and hearer, and previous utterances. Suppose we have already generated (1), and the next sentence to be generated also refers to the same individual and informs us that John was at his father's for two days.

(1) John_i visited his father.
(2) He_i stayed for two days.

Immediate focus information, in the sense of Grosz (1979), interacts with a history of the previous sentence structures to determine a strategy for selecting the appropriate anaphor. Thus, selecting the appropriate pronoun is an attached procedure. The heuristic for discourse-directed pronominalization is as follows:
We distinguish the following major contributing factors for selecting one verb over the other;, (I) the intended perspec- tive of the situation, (2) the emphasis of one activity rather than another, (3) the focus being on a particular individ- ual, and (4) previous lexicalizations of the concept. These observations are captured by allowing/ocu8 to operate over several expression including event-types such as tra~/sr. Thus, the variables at pIw for focus in- dude: • end-of-transfer, • beginning-of-transfer, • activity-of- transfer, • goal-of-object, • source-of-object, • goal-of-money, • source-of-money. That is, lexical/zation depends on which expressions are in focus. For example, if John is the immediate focus (as in McKeown (1985)) and beginning-of-transfer is the current- focus, the generator will lexicalize from the perspective of the sell/ng, namely (2). Given a different focus configura- tion in the input to the generator, the selection would be different and another verb would be generated. 6. Conclusion In this paper we have argued that lexJcal selection is an important contributing factor to the process of gen- eration, and not just a side effect of grammatical deci- s/ons. Furthermore, we claim that open-class items are not only conceptually different from closed-class items, but are processed differently as well. Closed class items have no epistemological status other than procedural attach- ments to conceptual and discourse information. Related to this, we discovered an interesting distinction between two types of closed-class items, distinguished by the knowledge sources necessary to generate them; discourse oriented and proposition-oriented. Finally, we extend the importance of focus information for directing the generation process. 205 References [1] Appelt, Dougla~ Planning Enqlish Sentences, Cam. bridge U. Press. [2] Chomsky, Noam A~pec~ on tM. Theo~ o! $ynt~ MIT Press. [3] Cumming, Susanna, "A Guide to Lexical Acquisi- tion in the JANUS System" ISI Research Report ISI/RR-85-162, Information Sciences Institute, Ma- rina del Rey, California~ 1986a. [4] Cvmming, Stumana, "The Distribution of I.,exic.M Information in Text Generation', presented for Work- shop on Automating the Lexicon, Pisa~ 1986b. [5] Den', K. and K. McKeown "Focus in Generation, COLING 1984 [6] Dowty, David R., Word Meaning and Montague Grammar, D. Reidel, Dordrecht, Holland, 1979. [7] Hall/day, M.A.K. ~Options and functions in the En- gl~h clause m. Brno Studies in Enfli~h 8, 82-88. [8] Jacobs, Paul S., "PHRED: A Generator for Nat- ural Language Interface', Computational Linguis- tics, Volume 11, Number 4, 1085. [9] Katz, Jerrold and Jerry A. Fodor, "The Structure of a Semantic Theory', Language Vol 39, pp.170-210, 1963. [10] Mann, William and Matthiessen, "NIGEL: a Sys- temic Grammar for Text Generation', in Freddle (ed.), Systemic Perspectives on Discoerae, Ablex. [11] McDonald, David and James Pustejovsky, "Descrip- tion directed Natural Language Generation" Pro- ceedings of IJCAI-85. Kaufmann. [12] McDonald, David and James Pustejovsky, "A Com- putational Theory of Prose Style for Natural Lan- guage Generation, Proceedings of the European ACL, University of Geneva, 1985. [13] McKeown, Kathy Tez~ Generatio,~ Cambridge Uni- versity Press. [14] McKeown, Kathy, "Stratagies and Constraints for Generating Natural Language Text ~, in Bolc and McDonald, 1087. [151 Morrow "The Processing of Closed Class Lexical Items', in Cognitive Science 10.4, 1986. 
6. Conclusion

In this paper we have argued that lexical selection is an important contributing factor to the process of generation, and not just a side effect of grammatical decisions. Furthermore, we claim that open-class items are not only conceptually different from closed-class items, but are processed differently as well. Closed-class items have no epistemological status other than procedural attachments to conceptual and discourse information. Related to this, we discovered an interesting distinction between two types of closed-class items, distinguished by the knowledge sources necessary to generate them: discourse-oriented and proposition-oriented. Finally, we extend the importance of focus information for directing the generation process.

References

[1] Appelt, Douglas, Planning English Sentences, Cambridge University Press.
[2] Chomsky, Noam, Aspects of the Theory of Syntax, MIT Press.
[3] Cumming, Susanna, "A Guide to Lexical Acquisition in the JANUS System," ISI Research Report ISI/RR-85-162, Information Sciences Institute, Marina del Rey, California, 1986a.
[4] Cumming, Susanna, "The Distribution of Lexical Information in Text Generation," presented at the Workshop on Automating the Lexicon, Pisa, 1986b.
[5] Derr, M. and K. McKeown, "Focus in Generation," COLING 1984.
[6] Dowty, David R., Word Meaning and Montague Grammar, D. Reidel, Dordrecht, Holland, 1979.
[7] Halliday, M.A.K., "Options and functions in the English clause," Brno Studies in English 8, 82-88.
[8] Jacobs, Paul S., "PHRED: A Generator for Natural Language Interfaces," Computational Linguistics, Volume 11, Number 4, 1985.
[9] Katz, Jerrold and Jerry A. Fodor, "The Structure of a Semantic Theory," Language, Vol. 39, pp. 170-210, 1963.
[10] Mann, William and Matthiessen, "NIGEL: a Systemic Grammar for Text Generation," in Freedle (ed.), Systemic Perspectives on Discourse, Ablex.
[11] McDonald, David and James Pustejovsky, "Description Directed Natural Language Generation," Proceedings of IJCAI-85, Kaufmann.
[12] McDonald, David and James Pustejovsky, "A Computational Theory of Prose Style for Natural Language Generation," Proceedings of the European ACL, University of Geneva, 1985.
[13] McKeown, Kathy, Text Generation, Cambridge University Press.
[14] McKeown, Kathy, "Strategies and Constraints for Generating Natural Language Text," in Bolc and McDonald, 1987.
[15] Morrow, "The Processing of Closed Class Lexical Items," Cognitive Science 10.4, 1986.
[16] Nirenburg, Sergei, Victor Raskin, and Allen Tucker, "The Structure of Interlingua in TRANSLATOR," in Nirenburg (ed.), Machine Translation: Theoretical and Methodological Issues, Cambridge University Press, 1987.
[17] Wilks, Yorick, "Preference Semantics," Artificial Intelligence, 1975.
CONSTRAINTS ON THE GENERATION OF ADJUNCT CLAUSES

Alison K. Huettner*  Marie M. Vaughan**  David D. McDonald**
Department of Linguistics*
Department of Computer & Information Science**
University of Massachusetts
Amherst, Massachusetts 01003

ABSTRACT

This paper presents an analysis of a family of particular English constructions, all of which roughly express "purpose". In particular we look at the purpose clause, rationale clause, and infinitival relative clause. We (1) show that couching the analysis in a computational framework, specifically generation, provides a more satisfying account than analyses based strictly on descriptive linguistics, (2) describe an implementation of our analysis in the natural language generation system MUMBLE-86, and (3) discuss how our architecture improves upon the techniques used by other generation systems for handling these and other adjunct constructions.

1. INTRODUCTION

Natural language provides a variety of devices for expressing relations between elements in a text. Simply positioning two sentences in sequence conveys an implicit relation between them:

(1) I bought a book. I'm going to read it on the plane.

Clauses may also be joined with explicit lexical connectives:

(2) I bought a book so that I could read it on the plane.

A few relations may be expressed directly through particular types of subordination of one clause to another:1

(3) I bought a book to read on the plane.

This latter category is the most cohesive of these three devices, as the adjunct is crucially dependent on the material in the matrix clause for its interpretation (Halliday & Hasan, 1976). However, such structural linking mechanisms are also the most limited in applicability: only certain relations may be expressed in this way and complex grammatical constraints must be satisfied.

In this paper, we analyze a particular class of structural devices, including the purpose clause (exemplified in 3 above), rationale clause, and infinitival relative, from the perspective of natural language generation. All three constructions express kinds of "purpose": purpose clauses express the use to which someone will put an object that is expressed in the main clause; rationale clauses express the overall intention behind the main clause action; infinitival relatives express the usual function of their NP head.2 We look at what underlying semantic relations license the constructions, the constraints on the syntactic form of the main and adjunct clauses, and the gapping pattern of the arguments of each adjunct. We discuss these as information needed by the generator in order for it to choose and use these devices correctly and discuss at what stages in the generation process the information must be applied. We contrast our analysis with those typically given from the perspective of generative-transformational linguistics, particularly thematic analyses, concluding that an analysis that considers the construction in a particular situation and in terms of a coherent model of the world can capture the constraints more easily. We provide a particular example implemented in the natural language generation system MUMBLE-86 (McDonald, 1984) and show how our analysis may be generalized to similar structural adjunct constructions.

1 We refer here to infinitive clauses which are grammatically related to the main clause as optional adverbials rather than as complements (arguments) to a verb, such as "Floyd wanted to go to the zoo".
2 The notion of "purpose" is of course ambiguous between "intention" and "function".
We further show that many earlier approaches to generating complex sentences (Derr & McKeown, 1984; Davey, 1974; Kukich, 1985; Mann & Moore, 1981) have architectural limitations that would keep them from handling these types of constructions with any generality.

2. DESCRIPTION OF THE CONSTRUCTION

Before addressing the generation of adjunct infinitive clauses, it is necessary to define our terms and distinguish the different constructions. We will begin by discussing purpose clauses3 and then contrast them with rationale and infinitival relative clauses.

2.1 Purpose clauses

A purpose clause (PC) expresses the purpose or intended use of a particular object which the main clause is in some sense "about". It is attached as a daughter of VP and is fixed in VP-final position. It has the following variants, distinguished trivially by the position of the gap:4

(4) a. I bought the shelf_i [e_i to hold my cookbooks]
    b. I bought the cookies_i [for Mary to eat e_i]
    c. I bought the cushion_i [for Mary to sit on e_i]

The sentences in (4) demonstrate that PC has one obligatory gap, which can occur in any of its NP argument positions: subject position, as in (a);5 direct object position, as in (b); or prepositional object position, as in (c). The gap is coreferential with ("controlled by") the direct object of an SVO main clause, or with the subject of a passive or unaccusative main clause. This pattern of antecedents has been variously characterized as deep structure (direct) objects (Huettner, 1987; implicitly in Rappaport & Levin, 1986); as arguments bearing the thematic role of Theme (Faraci, 1974; Williams, 1980); or as entities whose availability for further manipulation plays a part in the semantics of the sentence (Jones, 1985).

A PC with its obligatory gap in non-subject position (like those in 4b,c above) may have an additional subject gap, as shown in (5):

(5) I_j bought it_i [e_j to eat e_i]

This second gap is optional, and the determination of its antecedent is more complex than the controller of the obligatory gap. In (5) the PC subject is coindexed with the matrix (main clause) subject; however, (6) shows that an indirect object takes precedence over the subject as controller for this gap:6

(6) a. I gave it_i to Mary_k [e_k to read e_i]
    b. *I_j gave it_i to Mary_k [e_j to read e_i]

When there is no suitable antecedent in the matrix, the optional subject gap will have arbitrary or indefinite reference:

(7) a. This box_i was purchased [e_arb to keep supplies in e_i]
    b. These doughnuts_i are [e_arb to eat e_i]

The set of antecedents for the optional subject gap has been characterized configurationally, as the "closest" NP argument after the obligatory gap has found an antecedent (Chomsky, 1980); thematically, as the highest NP argument on a "thematic hierarchy" ranging from Goal to "arbitrary" (Nishigauchi, 1984); and pragmatically, as the person in whose control the Theme is at the time of the action (Ladusaw & Dowty, 1985).

3 The purpose clause has also been known as a "retroactive purpose clause", for example in Jespersen (1940). Jespersen reserves the term "purpose clause" for what we are calling a rationale clause; however, our terminology dates at least from Faraci (1974) and is used by Bach (1982) and Jones (1985) among others.
4 The symbol "e" stands for an empty category, or gap, in an argument position. The subscripts indicate coreference.
5 A PC with a subject gap is often called an objective clause.
6 Notice that in sentences like (6a), it is the status of Mary as indirect object which allows it to control the subject gap. Prepositional objects which are not indirect objects cannot be controllers here, as shown in (a) below, while indirect objects which are not prepositional objects may still control the PC subject ((b) below).
   a. I_j got the bones_i from Paul_k [e_j/*k to feed e_i to the dog]
   b. I_j gave Mary_k this very dull book_i [e_*j/k to read e_i]
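The antecedent pattern just described can be summarized procedurally. The sketch below is only illustrative: the plist representation of the matrix clause and the function names are our own, not part of any analysis or system described here.

    ;;; Controllers of the two PC gaps, per Section 2.1 (hypothetical
    ;;; representation: the matrix clause is a plist of grammatical
    ;;; relations plus a :voice feature).

    (defun pc-obligatory-controller (matrix)
      "The direct object of an SVO matrix, or the subject of a passive
    or unaccusative matrix, controls the obligatory gap."
      (if (member (getf matrix :voice) '(:passive :unaccusative))
          (getf matrix :subject)
          (getf matrix :direct-object)))

    (defun pc-subject-controller (matrix)
      "For the optional subject gap, an indirect object takes precedence
    over the subject, cf. (5)-(6); with no suitable antecedent the gap
    has arbitrary reference, cf. (7)."
      (or (getf matrix :indirect-object)
          (getf matrix :subject)
          :arbitrary))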
2.2 Rationale Clause

Easily confused with the purpose clause is the rationale clause (RatC), also known as an "in order to" clause or result clause. RatC can be distinguished from PC by the fact that RatC permit only subject gaps, whose antecedent is (usually) the matrix subject, rather than its object. Note the ambiguity of the following:

(8) Amy Lou_i took Mildred_j to the zoo e_i/j to feed the lions.

On the PC reading, Mildred is feeding the lions; on the RatC reading Amy Lou is feeding the lions (possibly using Mildred as lion food). A RatC reading may always be paraphrased with in order, as in (9), to rule out the PC reading:

(9) Amy Lou_i took Mildred_j to the zoo in order e_i/*j to feed the lions.

In contrast with PC, the controller of a RatC gap need not be any argument of the main verb, but can be the matrix predicate as a whole:

(10) Mildred was thrown in the lion cage to keep her from talking.

Further, the RatC subject gap is optional:

(11) Elroy killed Oscar in order for Sylvia to escape.

Finally, RatC are daughters of S, and not VP, and may therefore be preposed alone (12b) or otherwise isolated from the VP (12c):

(12) a. Helga carries a hat pin to protect herself.
     b. To protect herself, Helga carries a hat pin.
     c. What Helga does to protect herself is carry a hat pin.

2.3 Infinitival Relative Clauses

Infinitival relatives (IR) are superficially very similar to purpose clauses, especially in the patterning of their gaps. Like tensed relatives, they are daughters of NP; if the NP in question is in the VP, IR can be easily mistaken for PC:

(13) a. IR: I bought [a pan_i [e to fry omelets in e_i]]
     b. PC: I bought [a pan_i] [e to fry omelets in e_i]

(14) a. IR: Elroy really needs [a woman_i [e_i to hold his hand]]
     b. PC: Elroy really needs [a woman_i] [e_i to hold his hand]

IR, like PC, have one obligatory gap in either object (13a) or subject (14a) position, which is controlled, not by the matrix object (as in PC), but by the head of the NP containing the relative (just as in a tensed relative clause). If the obligatory gap is in object position, there may or may not be a subject gap as well. This optional subject gap is controlled exactly like the optional gap in a PC. An IR may be distinguished from a PC by making its containing NP the subject of the matrix sentence; PC may not occur in post-subject position. Another test is to make pronominal or definite the antecedent of the obligatory gap; IR may only have indefinite heads.

2.4 What the constructions mean

Three things are being communicated when one uses a purpose clause: an event of acquisition or use, an object (the thing which is being acquired or used), and the purpose to which the object will be put. That these elements form a deliberate complex and are not independent is made clear by attempting to omit either of the first two elements while retaining the syntactic form that gives the purpose clause its special character:
(15) a. *I went to the bookstore to read on the plane.
     b. *Peter read a book for Helga to read on the plane.

In (15a), the object is not explicit in the matrix clause; in (15b), the matrix does not convey any sense of possession. Both are ungrammatical.

The infinitival relative, in contrast, has only two elements: an object and its purpose. Furthermore, there is no particular event that this purpose is specific to, i.e. no special relationship between the matrix clause in which the object appears and the purpose expressed by the relative.7 Consequently, the notion of purpose in an IR is narrower than in a PC, closer to the object's intrinsic function or unmarked use.

The rationale clause differs from both the PC and IR by not being object-centered at all. Instead, a RatC adjunct expresses the goal which the matrix action was intended to bring about. Note that as the various types of infinitive clauses become less deeply embedded, syntactically speaking, the scope of the expressed purpose becomes wider: from the standard function of an object, expressed within a noun phrase (IR); to the function some agent has imposed on an object, expressed in the verb phrase (PC); to the intended goal of the agent in performing the matrix activity, expressed in an S-level adjunct (RatC).

3. GENERATING THE CONSTRUCTIONS

To analyse a construction for generation, we must consider what it means, or, put another way, consider why a speaker would choose to use it, especially given the subtleties of meaning that differentiate it from similar constructions. The next consideration, and the subject of the present section, is how the construction should be situated within the generation process: what decisions, made at what point or points in the process, contribute to the selection and realization of the construction as part of an utterance?

We begin with an overview of how decision making is organized in our model of generation. We then look at how a descriptive treatment in terms of thematic roles could be turned into an algorithm for generation, and show that it fails to take advantage of the information that is available at the early stages of generation. A treatment tailored to generation is markedly simpler: creating a PC from a motivated message is easier than describing the end product.

3.1 Decision Making in Generation

In generation, unlike comprehension, the speaker's appreciation of his situation, his goals, and the information that he wants to communicate are self-evident, rather than needing to be discovered. The core problem in generation is making decisions: knowing what decisions must be made, what information bears on them, what the alternative choices are and how they are to be represented. Carefully controlling the timing of when specific decisions are made offers the possibility of designing the generation process so as to achieve a very high level of efficiency. Forcing a decision too early, before all of the information it requires is available, may lead to guessing and later having to back up and undo that choice and any later ones that depended on it. Making a decision too late can mean missing opportunities to propagate information about the choice to other decisions that it should influence. Overall, the most pivotal and least constrained decisions should be made first, so that their consequences can be known soon enough to not hold up the others that are dependent on their choices.

7 An NP containing an infinitival relative clause is characteristically descriptive rather than referential; however, this has more to do with the restrictive nature of the relative than with the content of the matrix.
In our model of generation, this criterion has led us to the view that decisions about the information an utterance is to convey will be made before decisions about syntactic form or serial order. These early decisions typically include choices of wording and influence all aspects of a text's form. The output of such decisions is expressed in an explicit representational level we call the "message level" (McDonald & Vaughan, 1987). Decisions reflecting the surface ordering of the arguments are made in the mapping to the next level of representation, the surface structure. As this structure is traversed, decisions about the particular realization of the arguments are made, morphological specialization is done, and the text is output.

3.2 Attempting to Adapt a Descriptive Analysis

In conventional transformational-generative analyses, the rules governing the occurrence of gaps in the constructions we are studying are characterized from a purely descriptive perspective. They do not try to determine which argument should be gapped, but rather where gaps may occur and what the antecedent of each gap will be. Directly adapting such an analysis to the generation task would involve complete specification of the surface structure followed by a multi-step matching algorithm to realize the gap(s).

Of descriptive analyses, those couched in terms of thematic roles seem best suited for the generation of PC, since they allow a single description of the antecedent of the obligatory gap. A possible algorithm for locating gaps in PC would be as follows (assuming that arguments of the matrix verb are still accessible from within the adjunct and are annotated with their thematic roles):

1. Gap the first argument in the PC which is an occurrence of the matrix Theme.
2. a. If the PC subject matches the matrix Goal, gap it; or
   b. if there is no matrix Goal and the PC subject matches the matrix Source or Location, gap it; or
   c. if there is no matrix Source or Location either, and the PC subject is given as "unspecified", gap it.

While for our purposes such an algorithm is an improvement over a structural description, it is still unnecessarily complicated. For instance, there is no need to search the matrix clause for its theme, since when generating we already know trivially which argument to obligatorily gap -- the one that the purpose clause was chosen to express the purpose of.
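For concreteness, here is the matching algorithm rendered procedurally; the list-and-plist representation is our own invention, used only to show how many steps the descriptive formulation requires.

    ;;; The descriptive gapping algorithm above, as code (hypothetical
    ;;; representation: PC-ARGS is the PC's NP arguments in order,
    ;;; subject first; MATRIX-ROLES is a plist of matrix thematic roles).

    (defun gap-purpose-clause (pc-args matrix-roles)
      "Return PC-ARGS with gapped positions replaced by (:gap <antecedent>)."
      (let* ((theme (getf matrix-roles :theme))
             ;; Step 1: gap the first occurrence of the matrix Theme.
             (args (substitute (list :gap theme) theme pc-args :count 1))
             ;; Step 2: locate a controller for the optional subject gap.
             (controller (or (getf matrix-roles :goal)       ; step 2a
                             (getf matrix-roles :source)     ; step 2b
                             (getf matrix-roles :location)
                             :unspecified)))                 ; step 2c
        ;; Gap the PC subject when it matches the controller (an
        ;; "unspecified" subject matches the :unspecified default).
        (if (eql (first args) controller)
            (cons (list :gap controller) (rest args))
            args)))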
The obligatory gap is thus inserted at the message level, and persists into surface structure, where realization of the two clauses as active, passive, etc. can take place without a subsequent costly calculation 210 of which structural position should be realized as a gap. (We will discuss the optional gap below). Since the tense of the adjunct is left unspecified in the realization specification, it will surface as an infinitive. Delaying the realization of the two clauses until the linguistic context governing that realiation has been established provides versatility. For example, the whole construction could be a complement to another verb, as in (16a), or to another infinitival adjunct, such as the rationale clause shown in (16b): (16) a. I wanted to buy a book to read on the plane. b. I went to the bookstore to buy a book to read on the plane. One potential problem with this analysis is that the lack of prior constraint leaves open the possibility of generating rather awkward constructions, such as the following: (17) A book was bought by me to read on the plane. It is our intuition, however, that the awkwardness of this sentence comes from a lack of motivation for the passive rather than any problem with the construction as a whole. Without a motivated source, this construction would never be generated; consequently we need not address how to block it. We can use this sort of argument to great advantage when working in a generation framework, which is one of the reasons why it provides a better model of how language is actually produced than the usual linguistic strategy of free generation with surface level filters. The obligatory gap in the PC can (and should) be handled at the message level because (1) at that point all the information it requires is available, (2) no further information bearing on the identification of the argument to be gapped will become available later during realization (i.e. there is nothing gained by waiting), and (3) the means for carrying out the gapping operation are at hand (see next section). The optional subject gap is a different matter. This gap is licensed only if its antecedent is explicitly mentioned in the main clause, a fact that is not known at the message level. (More to the point, having known the information when the message was being assembled was unlikely to have changed the decisions that were made; consequently there is no utility to making it explicit there.) Since the information needed to consider gapping the PC's subject is not available until the matrix clause has been realized, the gapping operation must be done at the level of surface structure rather than the message level. By relying on the fact that only well- formed, motivated messages are ever going to be constructed, a surface-level rule for the operation can be compactly stated: "gap if the subject is mentioned in the matrix or is arbitrary (and non-emphatic). ''8 The single gap of a rationale clause is handled very much like the optional gap of the purpose clause. The planner is responsible for the overall relationship between an action and an intended result of that action. When the message is converted to a surface structure, it is realized as a main and a subordinate clause; the main clause, as the head of the bundle, is built first, and the RatC is then attached either before or after it. During traversal of the tree, the RatC subject will be gapped if it matches the main clause subject or the main clause as a whole. 
4. EXAMPLE

In this section we describe the particulars of our implementation of purpose clauses in the natural language generation system, Mumble. As we discussed in the previous section, this construction originates from a three-part relation between an event, an object, and its purpose. At the message level, the interface to Mumble, the schema shown below in Figure One takes these three arguments and builds a realization specification for a purpose clause:

    (define-specification-schema object-centered-event-&-purpose
        (object event object-purpose)
      (let ((matrix (instantiate-specification event))
            (adjunct (instantiate-specification object-purpose)))
        (add-further-specification matrix
            :specification adjunct
            :attachment-function 'purpose-of)
        (locate-argument-&-force-to-a-trace object
            :containing-rspec adjunct)))

FIGURE ONE

Figure Two shows the pretty printing of the realization specification created by this schema in order to generate the following text: "Floyd bought Helga a book to read on the plane."

    (event-bundle
      :head (:realization-fn buy                                ; #1
             :arguments (#<Floyd> #<Helga> #<book>))
      :accessories (tense-modal past)                           ; #2
      :further-specifications                                   ; #3
        ((:specification
            (event-bundle
              :head (:realization-fn read
                     :arguments (#<Helga> (:trace #<book>)))    ; #4
              :further-specifications
                (#<on-location #<read ...> #<plane>>))
          :attachment-fn purpose-of)))                          ; #5

FIGURE TWO

In order to make the example clearer, we have used the short-hand notation #<...> to indicate an underlying object from which a specification will be planned, rather than writing out its specification in all its detail. In the context of an actual underlying program generating from internally modeled objects, these could be unplanned specifications of objects, with planning and realization interleaved. However, as this example presently runs in our "stand-alone" interface, all the details are spelled out in the realization specification.

The bundle representation allows the planner to group component parts of the utterance. The head of the bundle (#1) is a constraint expression specifying the matrix clause. Accessories (#2) contain linguistically marked information, such as tense and NP number. The further-specification field (#3) specifies the adjunct. Note that the argument for #<book> (#4) has already been constrained to be a trace. The attachment function (#5) indicates how the further specification is related to the head. In this instance the attachment function is the particular attachment point PURPOSE-OF (shown in Figure Three), which splices a new element, labeled FOR-INFINITIVE, into the surface structure as the last element of the VP.

    (define-attachment-point purpose-of :splice
      :reference-labels (vp)
      :link (last)
      :new-slot (for-infinitive))

FIGURE THREE

Every specification has a realization function and a list of arguments.
In general, the realization function is a class of choices which defines the set of initial trees (Joshi, 1985) which can realize the specification. The choices are annotated with the grammatical and contextual characteristics which distinguish their use. For example READ (#6), through a curried realization class (shown in Figure Four), uses the class AGENT-VERB-THEME.9

    (define-curried-realization-class Read (agent theme)
      :class agent-verb-theme ((verb "read")))

    (define-realization-class Agent-verb-theme (agent verb theme)
      (((basic-clause-svo agent verb theme)
        (clause) ())
       ((for-infinitive-svo agent verb theme)
        (for-infinitive) ())
       ((relative-clause-svo rel-pro(agent) trace(agent) verb theme)
        (relative-clause) (arg-same-as-head (agent)))
       ((relative-clause-svo rel-pro(theme) agent verb trace(theme))
        (relative-clause) (arg-same-as-head (theme)))))

FIGURE FOUR

The message is realized in stages. First, the head of the bundle (#1) is realized by making a choice in its class (similar to that for READ in Figure Four) and building the surface structure representation for that choice, shown below in Figure Five.

[Figure Five: the surface structure built for the head of the bundle -- a clause whose subject is #<Floyd> and whose VP contains the verb "buy", the indirect object #<Helga>, and the direct object #<book>, with the attachment point purpose-of available at the clause node.]

FIGURE FIVE

9 We use thematic roles as argument names in classes heuristically; we are not committing ourselves at this point to a thematic analysis of argument structure.
While these approaches provide an opportunity for choosing structures such as purpose clauses early and as one piece, they are seriously lacking in generality and flexibility. Both assume a limited domain where all of the possible propositions and their plausible combinations can be predetermined. In the Knowledge Delivery System (KDS) Mann & Moore (1981) use a hillclimbing algorithm to determine which propositions should be combined into complex sentences. The algorithm assumes the information to be conveyed has been broken into kernel sized chunks and filtered to delete any repetitious or inferable information. This has the drawback that once the original information has been fragmented into kernels, the original relations between them have been lost. The aggregation rules must consequently use shared arguments and predefined templates to combine the kernels into sentence sized chunks. This causes the same problems as those described for Derr & McKeown: determining the gapping pattern in the adjunct clause and retaining generality. [SENTENCE] clause [SUBJECTI --4~ [TNS] ~ [PREDICATE] <past> np S V P ~.,~..,~......,.,..~ [HEAD] [VERBI .....~INDIR-OBJ] ~ [DIR -OBJ] "Floyd" np b-, j [HEADI "Helga" Text output so far: Floyd bought Helga a book [FOR-INFINITIVE] np clause [HEAD] "book" [FOR-SUBJECT] "=---I~[PREDICATEI [VERB] .__.~[DIR -OBJt . . . "head" trace FIGURE SIX 213 6. CONCLUSION In this paper we have shown the importance of carefully choosing the framework in which to couch one's analysis. For the generation of adjunct clauses, a computational approach which assumes a coherent underlying world model and text planner has clear advantages over a descriptive representation. We have also shown advantages of our model of generation: Our use of a message level distinct from and prior to the surface structure representation allows decisions to be made when germane information is most naturally available. 7. REFERENCES Bach, Emmon (1982), "Purpose Clauses and Control." In Jacobson & Pullum, eds., The Nature of Syntactic Representation, Reidel, Dordreeht, pp. 35-57. Chomsky, Noam (1980), "On Binding." Linguistic Inquiry 11.1, MIT Press, Cambridge. Davey, Anthony (1974), Discourse Production. Edinburgh University Press, Edinburgh, U.K. "Using Focus to Generate Complex and Simple Sentences." Proceedings of Coling-84, pp.319-326. Faraci, Robert A. (1974), Aspects of the Grammar of Infinitives and For Phrases. MIT Doctoral Dissertation (unpublished). Halliday, M.A.K. & Ruqaiya Hasan (1976), Cohesion in English.. London: Longman Group Ltd. Huettner, Alison K. (1987), Adjunct Infinitives: An Exegesis. PhD Dissertation, University of Massachusetts, Amherst, Massachusetts, forthcoming. Jespersen, Otto (1940), A Modern English Grammar on Historical Principles. G. Allen & Unwin, London. Jones, Charles (1985), Syntax and Thematics of Infinitival Adjuncts. Phi3 Dissertation, University of Massachusetts, Amherst, Massauchusetts. Joshi, Aravind (1985), "Tree Adjoining Grammars: How much context-sensitivity is required to provide reasonable structural descriptions?" In Dowty, Karttunen, & Zwicky (eds.), Natural Language Parsing, Cambridge University Press, Cambridge. Kukich, Karen (1985), "Explanation Structures in XSEL." Proceedings of the 23rd Annual Meeting of the Association for Computational Linguistics, pp. 228-237. Ladusaw, William & David Dowry (i985), "Towards a Formal Semantic Account of Thematic Roles." Unpublished manuscript. 
Mann, William & James Moore (1981), "Computer Generation of Multi-paragraph English Text." American Journal of Computational Linguistics, Vol. 7, No.l, Jan.-Mar., pp. 17-29. McDonald David D. (1984), "Description Directed Control: Its implications for natural language generation." In Cercone (ed.), Computational Linguistics, Plenum Press, pp. 403- 424; reprinted in Grosz, Sparck Jones, & Webber, Readings in Natural Language Processing, Morgan Kaufman Publishers, California, 1986. McDonald David D. & Marie M. Vaughan (1987), "Arguments for a Message Level in Natural Language Generation." Submitted to UCAI-87. Nishigauchi, Taisuke (I984), "Control and the Thematic Domain." Language 60, Linguistic Society of America, Waverly Press, Baltimore. Rappaport, Malka & Beth Levin (1986), "What to do with Theta Roles." Lexicon Project Working Papers #11, MIT Center for Cognitive Science, Cambridge. Ritehie, Graeme (1984), "A Rational Reconstruction of the Proteus Sentence Planner." Proceedings of Coling-84, pp. 327-329. Williams, Edwin (1980), "Predication." Linguistic Inquiry 11.1, MIT Press, Cambridge. 214
SITUATIONS AND INTERVALS1

Rebecca J. Passonneau
Paoli Research Center, UNISYS Defense Systems2
P.O. Box 517, Paoli, PA 19301 USA

ABSTRACT

The PUNDIT system processes natural language descriptions of situations and the intervals over which they hold using an algorithm that integrates aspect and tense logic. It analyzes the tense and aspect of the main verb to generate representations of three types of situations -- states, processes and events -- and to locate the situations with respect to the time at which the text was produced. Each situation type has a distinct temporal structure, represented in terms of one or more intervals. Further, every interval has two features whose different values capture the aspectual differences between the three different situation types. Capturing these differences makes it possible to represent very precisely the times for which predications are asserted to hold.

1. Introduction

This paper describes a semantics of situations and the intervals over which they hold that is neither situation semantics (Barwise and Perry, 1983) nor interval semantics (Dowty, 1979, 1982, 1986; Taylor, 1977). It is unfortunately difficult to avoid the overlap in terminology because what will be described here shares certain goals and assumptions with each. The concerns addressed here, however, arise from the computational task of processing references to situations in natural language text in order to represent what predicates are asserted to hold over what entities and when. Situation as used here pertains to the linguistic means for referring to things in the world, i.e., to sentences or predications. More specifically, situation is the superordinate category in Mourelatos' typology of aspectual classes of predications, schematised in Fig. 1.

                        SITUATIONS
                       /          \
                  STATES        OCCURRENCES
             Pressure is low.   /          \
                          PROCESSES       EVENTS
                     Alarm is sounding.  Engine failed.

            Fig. 1: Mourelatos' typology of situations

The PUNDIT text-processing system3 processes references to situations using an algorithm that integrates tense logic (Reichenbach, 1947) with aspect, or what Talmy (1985) calls the pattern of distribution of action through time. This paper describes how PUNDIT represents the temporal structure of three types of situations, namely states, processes and events, and how these situations are located in time.

2. Problems in Computing Appropriate Representations

The critical problems in the semantic analysis of references to situations and their associated times are: 1) language encodes several different kinds of temporal information, 2) this information is distributed in many distinct linguistic elements, and finally, 3) the semantic contribution of many of these elements is context-dependent and cannot be computed without looking at co-occurring elements. These problems have been addressed as follows.

1 This work was supported by DARPA under contract N00014-85-C-0012, administered by the Office of Naval Research. APPROVED FOR PUBLIC RELEASE, DISTRIBUTION UNLIMITED.
2 Formerly SDC--A Burroughs Company.
3 PUNDIT is an acronym for Prolog UNderstands and Integrates Text. It is a modular system, implemented in Quintus Prolog, with distinct syntactic, semantic and pragmatic components (cf. Dahl et al., 1987).
A decision was made to focus on the kinds of temporal information embodied in the verb and its categories of tense, taxis4 and grammatical aspect,5 and to temporarily ignore other kinds of temporal information.6 Computation of this information was then divided into two relatively independent tasks, with appropriate information passed between the modules performing these tasks in order to accommodate context-dependencies. The first task, carried out by Module 1, makes use of the aspectual information in the verb phrase (lexical and grammatical aspect) to determine the situation type being referred to and its temporal structure. An abstract component of temporal structure, referred to as the event time (following Reichenbach, 1947), serves as input to Module 2, where the deictic information in the verb phrase (tense and taxis) is used to compute temporal ordering relations, i.e., where the situation is located with respect to the time of text production. Section 3 outlines the general goals for computing temporal structure and §4 describes in detail how it is computed. Then §5 briefly illustrates how the event time which Module 1 passes to Module 2 simplifies the interaction of tense and aspect.

4 Taxis (Jakobson, 1957) refers to the semantic effect of the presence or absence of the perfect auxiliary.
5 Aspect is both part of the inherent meaning of a verb (lexical aspect) and also signalled by the presence or absence of the progressive suffix -ing (grammatical aspect).
6 E.g., rate (given by adverbs like rapidly), "patterns of frequency or habituation," and so on (cf. Mourelatos, 1981).

3. Goals for Representing Situations

The goal in generating representations of the temporal structure of situations was to closely link the times at which situations are said to hold with the lexical decompositions of the predicates used in referring to them. The decompositions encode aspectual information about the situation types which is used in determining what type of situation has been referred to and what its temporal structure is. Distinct components of the semantic decompositions correspond to different features of the intervals with which they are associated. As §4 will demonstrate, the interpretation of these components of temporal meaning depends on the interaction between lexical and grammatical aspect.

This approach to the compositional semantics of temporal reference is similar in spirit to interval semantics. Interval semantics captures the distinct temporal properties of situations by specifying a truth conditional relation between a full sentence and a unique interval (Dowty, 1979, 1986). This is motivated by the observation that the aspectual type of a sentence depends simultaneously on the aspectual class of a particular lexical item, its tense, taxis and grammatical aspect, and the nature of its arguments (cf. Mourelatos, 1981; note that the latter factor is not handled here). The goal of PUNDIT's temporal analysis is not simply to sort references to situations into states, processes and events, but more specifically to represent the differences between the three types of situations by considering in detail the characteristics of the set of temporal intervals that they hold or occur over (Allen, 1983, p. 132). Thus, instead of specifying truth conditional properties of sentences, the temporal semantics outlined here specifies what property of an interval is entailed by what portion of the input sentence, and then compositionally constructs a detailed representation of a state, process or event from the intervals and their associated properties.

3.1. Intervals and Their Features

Each situation type has a distinct temporal structure comprised of the interval or intervals over which it holds. Two features are associated with each interval, kinesis and boundedness. Very briefly, kinesis pertains to the internal structure of an interval, or in informal terms, whether something is happening within the interval. Boundedness pertains to the way in which an interval is located in time with respect to other times, e.g., whether it is bounded by another interval.

4. Part One of the Algorithm: Computing Temporal Structure

The input used to compute the temporal structure of a situation consists of the grammatical aspect of the verb, that is, whether it is progressive, and the decomposition produced by PUNDIT's semantic interpreter (Palmer et al., 1986). The lexical decompositions employed by PUNDIT (Passonneau, 1986b) not only represent the predicate/argument structure of verbs, but in addition, following the example of Dowty's aspect
Thus, instead of specifying truth conditional properties of sentences, the temporal semantics outlined here specifies what property of an interval is entailed by what portion of the input sentence, and then compositionally constructs a detailed representa- tion of a state, proeess or event from the inter- vais and their associated properties. 8.1. Intervals and Their Features Each situation type has a distinct temporal structure comprised of the interval or intervals over which it holds. Two features are associated with each interval, klnesle and boundedness. Very briefly, kinesls pertains to the internal structure of an interval, or in informal terms, whether something is happening within the inter- val. Boundedness pertains to the way in which an interval is located in time with respect to other times, e.g., whether it is bounded by another inter- val. 4. Part One of the Algorithms Computing Temporal Structure The input used to compute the temporal structure of a situation consists of the grammati- cal aspect of the verb, that is, whether it is pro- gressive, and the decomposition produced by PUNDIT's semantic interpreter (Palmer et al., 1986). The lexical decompositions employed by PUNDIT (Passonneau, 1986])) not only represent the predlcate/argument structure of verbs, but in addition, following the example of Dowty's aspect 17 calculus (1979), they represent a verb's inherent temporal properties, or lexlcal aspect. 7 In PUNDIT's lexical entries, there are three values of lexlcal aspect corresponding to the three types of situations. Four of the six possible combinations of grammatical and lexical aspect are temporally distinct. This section will go through the four cases one by one. 4.1. States The following conditional statement sum- marises the first of four cases of temporal struc- ture. The antecedent specifies the necessary input condition, the first clause of the consequent specifies the situation type, the second specifies the k|nesls Of its associated interval and the third specifies its boundedness. IF Lexical Aspect=stative THEN Situation is a state AND its Time Argument is a period AND this period is unbounded As shown here, if the lexical aspect of a predica- tion is stative, its grammatical aspect is irrelevant. The justification for ignoring grammat- ical aspect in the context of lexical stativity appears at the end of this section. A state is defined as a situation which holds for some interval that is both statle and unbounded. Example 1) illustrates a typical reference to a state situation along with its semantic decomposition. Note that the lexlcal head of the verb phrase is the adjective low. 1) The pressure was low. low (patlent([pressurel]) s As in Dowty's aspect calculus (1979), the decompo- sitions of stative predicates consist of semantic predicates with no aspectual operators or connec- tives. Computing the temporal structure associ- ated with 1) means finding a single interval with the appropriate features of kinesis and bounded- ness to associate with the stative predicate low(patlent(X)). rThe literature on upectual classes of verbs provides a variety of diagnostics for determining the inherent upect of verbs (cf. Vendler, 1967; Dowty, 1979). *PUNDIT's current application is to process short messages texts called CASREPS (CASualty REPorts) which describe Navy equipment failures. The arguments in the decompositions, e.g., [preuurel], are unique identifiers of the entities denoted by the surface noun phrues. They are crest- Kinesls of states. 
A static interval is temporally homogeneous. With respect to the relevant predication, there is no change within the interval; consequently, any subinterval is equivalent to any other subinterval. Thus, a static interval is defined much as stative predications are defined in interval semantics: An interval I associated with some predication φ is static iff it follows from the truth of φ at I that φ is true at all subintervals of I (cf. Dowty, 1986, p. 42). Situations are represented as predicates identifying the situation type (e.g., state). The situation denoted by 1) would be represented as follows:

    state([low1],
          low(patient([pressure1])),
          period([low1]))

The three arguments are: the unique identifier of the situation (e.g., [low1]), the semantic decomposition, and the time argument (e.g., period([low1])). The same symbol (e.g., [low1]) identifies both the situation and its time argument because it is the actual time for which a situation holds which uniquely identifies it.9 A period time argument in the context of a state predicate always represents a static interval.

Boundedness of states. The intervals associated with states are inherently unbounded. A temporal bound can be provided by an appropriate temporal adverbial (e.g., The pressure was normal when the pump seized),10 but here we consider only the temporal semantics specified by the verb form itself. When an unbounded interval is located with respect to a particular point in time, it is assumed to extend indefinitely in both directions around that time. In 1), at least part of the interval for which the predication low(patient([pressure1])) is asserted to hold is located in the past. However, this interval may or may not end prior to the present. The unbounded property of the interval can be illustrated more precisely by examining the relationship between the predication and the temporal adverbial modifying it in example 2):

2) The pressure was low at 08:00.

This sentence asserts that the state of low(patient([pressure1])) holds at 08:00 and possibly prior and subsequent to 08:00. That is, the sentence would be true if the pressure were low for only an instant coincident with 08:00, but it is not asserted to hold only for that instant. This is captured by defining the interval as unbounded. A situation representation does not itself indicate the boundedness of its period time argument. Instead, this feature is passed as a parameter to the component which interprets tense and taxis (cf. §5).

9 Though a situation is something quite different for Barwise and Perry (1983), they take a similar view of the role of a particular space-time location in tokenising a situation type (cf. esp. pp. 51ff).
10 In general, temporal adverbials can modify an existing component of temporal structure or add components of temporal structure.

As will be shown in the following section, the progressive assigns the features active and unbounded to non-stative verbs. But with stative verbs, the progressive contributes no temporal information. Inability to occur with the progressive has in fact been cited as a diagnostic test of statives, but as Dowty notes (1979), there is a class of progressives which denotes locative states (e.g., The socks are lying under the bed). Such statives occur in PUNDIT's current application domain in examples like the following sentence fragment:

3) Material clogging strainer.

A complete discussion of the interaction between progressive grammatical aspect and stative lexical aspect would have to address cases in which the progressive contributes non-temporal information (cf. Smith, 1983). However, these issues are not pertinent to the computation of temporal structure.
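The three-argument representation just illustrated can be assembled mechanically; the constructor below is our own sketch (PUNDIT itself builds Prolog terms), shown only to make the shared-identifier convention concrete.

    ;;; Sketch of the three-argument situation term of Section 4,
    ;;; e.g. state([low1], low(patient([pressure1])), period([low1])).

    (defun make-situation (type id decomposition time-type)
      "The shared identifier ID ties the situation to the actual time
    for which it holds, which is what uniquely identifies it."
      (list type id decomposition (list time-type id)))

    ;; (make-situation 'state 'low1 '(low (patient pressure1)) 'period)
    ;; => (STATE LOW1 (LOW (PATIENT PRESSURE1)) (PERIOD LOW1))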
4.2. Temporally Unbounded Processes

The second case of temporal structure involves progressive uses of non-stative verbs, i.e., process or transition event verbs.

    IF Lexical Aspect ≠ stative
    AND Grammatical Aspect = progressive
    THEN Situation is a process
     AND its Time Argument is a period
     AND this period is unbounded

In this case and the two subsequent ones, both lexical and grammatical aspect are relevant input. Processes are situations which hold over active intervals of time.11 Active intervals can be unbounded or unspecified for boundedness, depending on the grammatical aspect of the predication. The two possible temporal structures associated with processes are discussed in this and the following section. Example 4) illustrates a typical predication denoting a temporally unbounded process along with its semantic decomposition.

4) The alarm was sounding.
   DO(sound(actor([alarm4])))

DO is an aspectual operator identifying a decomposition as a process predicate (cf. Dowty, 1979).12 As with statives, computing the temporal structure for sentences like 4) involves finding a single interval to associate with the semantic decomposition.

Kinesis of processes. The presence of a DO operator in a decomposition indicates that the interval for which it holds must be active. Active and static intervals contrast in that change occurs within an active interval with respect to the relevant predication. For example, for any interval for which DO(sound(actor([alarm4]))) is true, the [alarm4] participant must undergo changes that qualify as sounding, and must continue to do so throughout the interval. As Mourelatos (1981) has pointed out, process predicates vary regarding how narrowly one can subdivide such intervals and still recognize the same process. Dowty has used this threshold of granularity as the defining characteristic of process sentences, and it is borrowed here to define active intervals: An interval I associated with some predication φ is active iff it follows from the truth of φ at I that φ is true at all subintervals of I down to a certain limit in size. As the process representation for 4) illustrates, processes and states are represented similarly.

    process([sound1],
            sound(actor([alarm4])),
            period([sound1]))

11 The distinction between static and active intervals is useful for interpreting manner adverbials indicating rate of change. Since stative predications denote the absence of change over time, they cannot be modified by rate adverbials.
12 Because the aspectual operator DO always has an actor semantic role associated with it, PUNDIT's semantic decompositions actually omit DO and use the presence of the actor role to identify process predicates.
cause(DO (|nstall(agent([englneer 8]))), BECOME(ins lled(theme([mter4]), Iocatlon(X)))) The cause predicate in the decomposition of in~tall indicates that it is a causative verb, and the BECOME operator that its lexical aspect is transition event. This aspectual class is a hetero- geneous one, but in general, transition event verbs are temporally more complex than stative or process verbs, and have a correspondingly more complex relation between their semantic decompo- sitions and temporal structure. Consequently, the discussion of the treatment of progressive transi- tion event verbs is postponed until after the func- tion of the aspectual operator BECOME has been explained. Boundedness. In 6), the interval associated with the alarm sounding is unbounded. It bears the same relationship to the at adverbial phrase modifying the predication as does the statlc inter- val in 2) above, repeated here as 7). 6) The alarm was sounding at 08:00. 7) The pressure was low at 08:00. This siml]arity between statives and progressives has led Vlach (1981) to identify them with each other. Here, the commonality among sentences like 1), 2), 4) and 8) is captured by associating the feature value unbounded both with stative lexi- cal aspect and with progressive grammatical aspect. The differences between the predications in 6) and 7), which show up in the types of modification and anaphorlc processes to which such predications are susceptible, are encapsulated in their contrasting values of klnes|s (cf. fn. 11 above). 4.8. Temporally Unspecified Processes The third case of temporal structure accounts for the differences between sentences like 4), having a process verb in the progressive, and 8), where the process verb is non-progressive. 8) The alarm sounded. The differences, which will be explained below, are captured in the following rule indicating that the actlve interval for which the predication is said to hold is unspecified for boundedness. IF Lexical Aspect=process AND Grammatical Aspect=non-progressive THEN Situation is a process AND its Time Argument is a period AND this period is unmpeeifled Again, the parameter indicating that the interval associated with 8) is unspecified gets passed to Module 2 which interprets tense and taxis. In every other respect, the analysis of the temporal structure associated with 8) resembles that for 4). A comparison of progressive and non- progressive process verbs in the context of an at adverbial phrase illustrates the relative indeter- rninacy of the non-progressive use. In the context of the progressive process verb in 8), the clock time is interpreted as falling within the active interval of sounding but in 9), where the verb is not progressive, 08:00 can be interpreted as falling at the inception of the process or as roughly Iocat- ing the entire process. 9) The alarm sounded at 08:00. Non-progresslve process verbs exhibit a wide vari- ation with respect to what part of the temporal structure is located by tense (Passonneau, 1986a). The influencing factors seem to be pragmatic in nature, rather than semantic. The solution taken here is to characterize the event tlme of such predications as having an unnpecifled relation to the active interval associated with the denoted process. 4.4. Transition Events As mentioned in the previous section, the temporal structure of transition events is more complex than that of states or processes. Correspondingly, the rule which applies to this case has more output conditions. 
    IF Lexical Aspect = transition event
    AND Grammatical Aspect = non-progressive
    THEN Situation = event
     AND Time Argument = moment
     AND this moment culminates an interval associated with a process
     AND this moment introduces an interval associated with a state or process

A transition event is a complex situation consisting of a process which culminates in a transition to a new state or process. Its temporal structure is thus an active interval followed by -- and bounded by -- a new active or static interval. The new state or process comes into being as a result of the initial process.13

As in Dowty (1986), both Vendler's achievements and his accomplishments collapse into one class, viz., transition events. That is, achievements are those kinesis predicates which are not only typically of shorter duration than accomplishments, but also those which we do not normally understand as entailing a sequence of sub-events, given our usual every-day criteria for identifying the events named by the predicate (Dowty, 1986, p. 43). Causative verbs, in which the action of one participant results in a change in another participant, are typical accomplishment verbs.

10) The pump sheared the drive shaft.
    cause(DO(shear(agent([pump5]))),
          BECOME(sheared(patient([shaft6]))))

Sentence 10) asserts that a process in which the pump participated (shearing) caused a change in the drive shaft (being sheared). Note that the decomposition explicitly indicates a causal relation between two conjoined predicates, one representing an activity performed by an agent, and the other representing the resulting situation. BECOME serves as the aspectual operator for marking transition event decompositions. The argument to BECOME constitutes the semantic decomposition of the new state or process arising at the culmination of the event.

Non-causative verbs can also denote transition events. With inchoatives, the same entity participates in both the initial process and the resulting situation denoted by the predication.

11) The engine failed.
    DO(fail(agent([engine1]))),
    BECOME(failed(patient([engine1])))

In 11), an engine is said to participate in some process (failing) which culminates in a new state (e.g., being inoperative). The semantic decompositions used in PUNDIT do not explicitly represent the initial processes involved in transition events because they are completely predictable from the presence of the BECOME operator. But both conjuncts are shown here to illustrate that computing the temporal structure of a transition event situation requires finding two intervals, one associated with the initial process predicate (e.g., DO(fail(agent([engine1])))) and the other with the predicate for the resulting situation (e.g., failed(patient([engine1]))).

As indicated in the rule for this case, the temporal structure also includes a moment of transition between the two intervals, i.e., its transition bound. Since a transition event is one which results in a new situation, there is in theory a point in time before which the new situation does not exist and subsequent to which it does. A transition bound is a theoretical construct not intended to correspond to an empirically determined time. In fact, it should be thought of as the same kind of boundary between intervals implied by Allen's meets relation (Allen, 1983; 1984, esp. p. 128). However, it is a convenient abstraction for representing how transition events are perceived and talked about.

We can now return to the question of the interpretation of progressive transition event verbs. In the context of a decomposition with a BECOME operator, the progressive is constrained to apply to the predicate corresponding to the initial process, i.e., the predicate denoting the portion of a transition event prior to the moment of transition. Computing the temporal structure for the progressive of install in 12), for example, involves generating a single active, unbounded interval for which the predication DO(install(agent([engineer8]))) holds:

12) The engineer is installing the oil filter.
    cause(DO(install(agent([engineer8]))),
          BECOME(installed(theme([filter4]), location(X))))

In this context, the remainder of the semantic decomposition denotes what the person reporting on the event assumes to be the eventual culmination of the process referred to as installing.

Kinesis. Examples 13) and 14) illustrate two types of transition events, one resulting in a new state, and one resulting in a new process. As illustrated,14 transition events are represented as complex situations in which an event with a moment time argument results in a new state or process:

13) The lube oil pump has seized.

    event([seize1],
          BECOME(seized(patient([pump1]))),
          moment([seize1]))
    state([seize2],
          seized(patient([pump1])),
          period([seize2]))
    starts(moment([seize1]), period([seize2]))

14) The engine started.

    event([start1],
          BECOME(operating(actor([engine1]))),
          moment([start1]))
    process([start2],
            operating(actor([engine1])),
            period([start2]))
    starts(moment([start1]), period([start2]))

The starts relation indicates that a transition bound (e.g., moment([seize1])) is the onset of the interval (e.g., period([seize2])) associated with the situation resulting from a transition event.

Boundedness. An important role played by the transition bound is that it serves as the temporal component of transition events for locating them with respect to other times. For example, the sentence in 15) asserts that the moment of transition to the new situation coincides with the clock time of 8:00.

15) The engine failed at 8:00.

The status of the engine prior to 8:00 is asserted to be different from its status afterwards.

13 A state may be a necessary precondition for a certain change to occur, but since states are defined by the absence of change, or negative kinesis, they are inherently incapable of generating new situations.
14 At present, PUNDIT explicitly represents only two components of transition event predications: the moment associated with an event of becoming, and a period associated with a resulting situation. This representation has been found to be adequate for the current application. The omission of the first interval is purely a matter of practical convenience, but could easily be represented should the need arise.
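Taken together, the four cases of this section define a small mapping from the aspectual input to the features of the temporal structure. The dispatch below is our own summary sketch, with keyword symbols standing in for PUNDIT's Prolog terms.

    ;;; Section 4's four cases as a single dispatch (illustrative only).

    (defun temporal-structure (lexical-aspect progressive-p)
      "Map lexical and grammatical aspect to situation type, time
    argument, kinesis and boundedness, following cases 4.1-4.4."
      (cond ((eq lexical-aspect :stative)          ; 4.1: progressive irrelevant
             '(:situation :state :time-argument :period
               :kinesis :static :boundedness :unbounded))
            (progressive-p                          ; 4.2: any non-stative verb
             '(:situation :process :time-argument :period
               :kinesis :active :boundedness :unbounded))
            ((eq lexical-aspect :process)           ; 4.3
             '(:situation :process :time-argument :period
               :kinesis :active :boundedness :unspecified))
            ((eq lexical-aspect :transition-event)  ; 4.4: the moment (a
             ;; transition bound) starts the period of the new situation
             '(:situation :event :time-argument :moment
               :boundedness :transition-bound))))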
We can now return to the question of the interpretation of progressive transition event verbs. In the context of a decomposition with a BECOME operator, the progressive is constrained to apply to the predicate corresponding to the initial process, i.e., the predicate denoting the portion of a transition event prior to the moment of transition. Computing the temporal structure for the progressive of install in 12), for example, involves generating a single active, unbounded interval for which the predication DO(install(agent([engineer8]))) holds:

12) The engineer is installing the oil filter.
    cause(DO(install(agent([engineer8]))), BECOME(installed(theme([filter4]), location(X))))

In this context, the remainder of the semantic decomposition denotes what the person reporting on the event assumes to be the eventual culmination of the process referred to as installing.

Kinesis. Examples 13) and 14) illustrate two types of transition events, one resulting in a new state, and one resulting in a new process. As illustrated,14 transition events are represented as complex situations in which an event with a moment time argument results in a new state or process:

13) The lube oil pump has seized.
    event([seize1], BECOME(seized(patient([pump1]))), moment([seize1]))
    state([seize2], seized(patient([pump1])), period([seize2]))
    starts(moment([seize1]), period([seize2]))

14) The engine started.
    event([start1], BECOME(operating(actor([engine1]))), moment([start1]))
    process([start2], operating(actor([engine1])), period([start2]))
    starts(moment([start1]), period([start2]))

The starts relation indicates that a transition bound (e.g., moment([seize1])) is the onset of the interval (e.g., period([seize2])) associated with the situation resulting from a transition event.

Boundedness. An important role played by the transition bound is that it serves as the temporal component of transition events for locating them with respect to other times. For example, the sentence in 15) asserts that the moment of transition to the new situation coincides with the clock time of 8:00.

15) The engine failed at 8:00.

The status of the engine prior to 8:00 is asserted to be different from its status afterwards.

14 At present, PUNDIT explicitly represents only two components of transition event predications: the moment associated with an event of becoming, and a period associated with a resulting situation. This representation has been found to be adequate for the current application. The omission of the first interval is purely a matter of practical convenience, but could easily be represented should the need arise.
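To make the two-interval representation concrete, the expansion could be written as the following Prolog sketch, echoing examples 13) and 14). The predicate expand_transition/3 and the term shapes are assumptions of this sketch, not PUNDIT's actual code.

    % Expand a BECOME decomposition into the two components PUNDIT represents
    % (see footnote 14): a becoming event at a moment, and the resulting
    % situation over a period, linked by the starts relation.
    expand_transition(Id, become(Result),
                      [ event(Id, become(Result), moment(Id)),
                        result(Id, Result, period(Id)),
                        starts(moment(Id), period(Id)) ]).

    % e.g. ?- expand_transition(start1, become(operating(actor(engine1))), Situation).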
5. Part Two of the Algorithm: Temporal Ordering Relations

PUNDIT employs a Reichenbachian analysis of tense which temporally locates situations in terms of three abstract times: the time of the situation (event time), the time of speech/text production (speech time), and the time with respect to which relational adverbials are interpreted (reference time). Reichenbach (1947) did not distinguish between the temporal structure of a situation and its event time. In PUNDIT, event time is a carefully defined abstract component of temporal structure in terms of which ordering relations are specified. It is determined on the basis of boundedness, and is always represented as a dimensionless moment.

5.1. Event Time

The three values of boundedness outlined above correspond to three possible relations of event time to a time argument. Examples 16) through 18) illustrate these relations. If an interval is unbounded, its event time is represented as an arbitrary moment included within the period time argument:

16) The pressure is low.
    Boundedness: unbounded
    Event time: M1 such that includes(period([low1]), moment([M1]))

For an interval unspecified for boundedness the event time has a non-committal relation to the interval, i.e., it may be an endpoint of or included within the period time argument:

17) The alarm sounded.
    Boundedness: unspecified
    Event time: M1 such that has(period([sound1]), moment([M1]))

The moment time argument of a transition event is identical to its event time. Identity, or the lack of referential distinctness, is handled through Prolog unification.

18) The engine failed.
    Boundedness: transition bound
    Event time: M1 unifies with moment([fail1])

Defining these three different relations of event time to temporal structure simplifies the computation of the ordering relations given by the perfect and non-perfect tenses.

5.2. Temporal Ordering Relations

The event time computed in Module 1 and the verb's tense and taxis comprise the input used in computing temporal ordering relations. Due to the pragmatic complexity of the perfect tenses and to space limitations, neither reference time nor taxis is discussed here (but cf. Passonneau, 1986a). The rules for the past and present tenses are quite simple. They locate the event time as coincident with or prior to the time of text production (i.e., the Report Time):

    IF Tense = present AND Taxis = non-perfect
    THEN coincide(Event Time, Report Time)

    IF Tense = past AND Taxis = non-perfect
    THEN precedes(Event Time, Report Time)

These two rules in combination with the different relations of event time to the temporal structures of situations make it possible to capture important facts about the interaction of tense and aspect. For example, present tense denotes an actual time only when applied to unbounded intervals. Thus a reference to an actual situation is computed for sentences like 19) but not 20).

19) The engine is failing.
20) The engine fails.

In 20), the present tense pertains not to a specific event of engine failure, but rather to the tendency for this type of situation to recur.

A predication denoting a past unbounded situation can be followed by a predication asserting the continuation or cessation of the same situation:

21) The pump was operating at 08:00 and is still operating.

A single interval would be generated for the two clauses in 21). However, a similar assertion following a predication with a transition event verb in the simple past is contradictory if still is interpreted as indicating persistence of the same event.15

22) ?The pump sheared the drive shaft and is still shearing it.

The event time for the first conjunct in 22) is a moment necessarily culminating in a new situation (i.e., a state of being sheared). Since the transition bound is dimensionless, the adverb still cannot refer to its persistence. A predication evoking an unspecified interval in a similar context can be interpreted analogously to either 21) or 22):

23) The pump operated at 08:00 and is still operating.

The non-committal relation of event time to temporal structure for unspecified intervals makes both interpretations of 23) possible, and selecting among them is undoubtedly a pragmatic task rather than a semantic one. As we will see next, the utility of distinguishing between unbounded and unspecified process predications is especially apparent in the context of temporal adverbials.

15 Another reading of 22) refers to a unique event followed by iterations of the same type of event.
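As a compact summary, the event-time relations and the two tense rules might be rendered as Prolog facts like the following; the predicate names are assumed for this sketch and are not PUNDIT's code.

    % Relation of event time to the time argument, keyed by boundedness.
    event_time_relation(unbounded,        includes).  % moment included in the period
    event_time_relation(unspecified,      has).       % endpoint of, or included in, the period
    event_time_relation(transition_bound, identity).  % unifies with the moment itself

    % Ordering of event time relative to report (speech) time, non-perfect taxis.
    tense_order(present, nonperfect, coincide).
    tense_order(past,    nonperfect, precedes).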
6. Conclusion: Adverbial Modification

The representations described above were inspired by remarks found in the literature on tense and aspect to the effect that the time schemata (Vendler, 1967) associated with different situations are crucial to the way we perceive and talk about them. One of the crucial types of evidence used in deriving PUNDIT's temporal semantics was the interpretation of temporal adverbials in different contexts (Passonneau, 1986a). Consequently, one of the advantages to the representations is that they make it possible to tailor the interpretation of a temporal adverb to the temporal structure of the modified situation.

For example, specifying a different relation for the event time of an active interval, depending on its boundedness, yields different temporal relations between the situations described in sentences like 24)-26), as shown informally in the examples.

24) The pump failed when the engine was rotating.
    transition of failure during period of rotation
25) The pump failed when the engine rotated.
    transition of failure during OR at one endpoint of period of rotation
26) The engine rotated when the pump failed.
    Same as 25)

Sentences like 25) and 26) are often interpreted with the process (e.g., rotation) beginning at or after the transition event moment (e.g., failure). PUNDIT's representations of the temporal semantics of predications are explicit enough yet sufficiently non-committal to provide suitable input to a pragmatic reasoner that could decide these cases.

Acknowledgements

I would like to thank Martha Palmer, Lynette Hirschman, Bonnie Webber and Debbie Dahl for their comments, encouragement and patience.

REFERENCES

Allen, James F. 1984. Towards a general theory of action and time. AI 23: 123-154.

Allen, James F. 1983. Maintaining knowledge about temporal intervals. CACM 26.11: 832-843.

Barwise, Jon and John Perry. 1983. Situations and Attitudes. Cambridge, Massachusetts: The MIT Press.

Dahl, Deborah. 1986. Focusing and reference resolution in PUNDIT. Presented at AAAI-86. Philadelphia, PA.

Dahl, Deborah; Dowding, John; Hirschman, Lynette; Lang, Francois; Linebarger, Marcia; Palmer, Martha; Passonneau, Rebecca; Riley, Leslie. 1987. Integrating Syntax, Semantics, and Discourse: DARPA Natural Language Understanding Program. Final Report May, 1985--May, 1987.

Dowty, David R. 1986. The effects of aspectual class on the temporal structure of discourse: semantics or pragmatics? Linguistics and Philosophy 9: 37-61.

Dowty, David R. 1979. Word Meaning and Montague Grammar. Dordrecht: D. Reidel.

Jakobson, Roman. 1971 [1957]. Shifters, verbal categories and the Russian verb. In his Selected Writings, vol. 2, pp. 130-147. The Hague: Mouton.

Mourelatos, Alexander P. D. 1981. Events, processes, and states. In Tedeschi and Zaenen, pp. 191-212.

Palmer, Martha; Dahl, Deborah A.; Schiffman, Rebecca J. [Passonneau]; Hirschman, Lynette; Linebarger, Marcia; Dowding, John. 1986. Recovering Implicit Information. 24th Annual Meeting of the ACL. Columbia University, New York.

Passonneau, Rebecca. 1986a. A Computational Model of the Semantics of Tense and Aspect.
Logic-Based Systems Technical Memo No. 43. Paoli Research Center. SDC. December, 1986.

Passonneau, Rebecca. 1986b. Designing Lexical Entries for a Limited Domain. Logic-Based Systems Technical Memo No. 42. Paoli Research Center. SDC. November, 1986.

Reichenbach, Hans. 1947. Elements of Symbolic Logic. New York: The Free Press.

Talmy, Leonard. 1985. Lexicalization patterns. In Language Typology and Syntactic Description, vol. 3: Grammatical Categories and the Lexicon, pp. 57-151. Edited by Timothy Shopen. Cambridge: Cambridge University Press.

Taylor, Barry. 1977. Tense and continuity. Linguistics and Philosophy 1.

Tedeschi, P. J. and A. Zaenen, eds. 1981. Syntax and Semantics, vol. 14: Tense and Aspect. New York: Academic Press.

Vendler, Zeno. 1967. Verbs and times. Linguistics in Philosophy. Ithaca, New York: Cornell University Press.

Vlach, Frank. 1981. The semantics of the progressive. In Tedeschi and Zaenen, pp. 271-292.
A Model For Generating Better Explanations

Peter van Beek
Department of Computer Science
University of Waterloo
Waterloo, Ontario
CANADA N2L 3G1

Abstract

Previous work in generating explanations from advice-giving systems has demonstrated that a cooperative system can and should infer the immediate goals and plans of an utterance (or discourse segment) and formulate a response in light of these goals and plans. The claim of this paper is that a cooperative response may also have to address a user's overall goals, plans, and preferences among those goals and plans. An algorithm is introduced that generates user-specific responses by reasoning about the goals, plans and preferences hypothesized about a user.

1. Introduction

What constitutes a good response? There is general agreement that a correct, direct response to a question may, under certain circumstances, be inadequate. Previous work has emphasized that a good response should be formulated in light of the user's immediate goals and plans as inferred from the utterance (or discourse segment). Thus, a good response may also have to (i) assure the user that his underlying goal was considered in arriving at the response (McKeown, Wish, and Matthews 1985); (ii) answer a query that results from an inappropriate plan indirectly by responding to the underlying goal of the query (Pollack 1986); (iii) provide additional information aimed at preventing the user from drawing false conclusions because of violated expectations of how an expert would respond (Joshi, Webber, and Weischedel 1984a, 1984b).

The claim of this paper is that a cooperative response can (and should) also address a user's overall goals, plans, and preferences among those goals and plans. We wish to show that an advice seeker may also expect the expert to respond in light of, not only the immediate goals and plans of the user as expressed in a query, but also in light of (i) previously expressed goals or preferences, (ii) goals that may be inferred or known from the user's background, and (iii) domain goals the user may be expected to hold. If the expert's response does not consider these latter types of goals, the result may mislead or confuse the user and, at the least, will not be cooperative. As one example, consider the following exchange between a student and student-advisor system.

User: Can I enroll in CS 375 (Numerical Analysis)?
System: Yes, but CS 375 does involve a lot of FORTRAN programming. You may find Eng 353 (Technical Writing) and CS 327 (AI) to be useful courses.

The user hopes to enroll in a particular course to help fulfill his elective requirements. But imagine that in the past the student has told the advisor that he has strong feelings about not using FORTRAN as a programming language. If the student-advisor gives the simple response of "Yes" and the student subsequently enrolls in the course and finds out that it involves heavy doses of FORTRAN programming, the student will probably have justifiably bad feelings about the student-advisor. The better response shown takes into account what is known about the user's preferences. Thus the system must check if the user's plan as expressed in his query is compatible with previously expressed goals of the user. The system can be additionally cooperative by offering alternatives that are compatible with the user's preferences and also help towards the user's intended goal of choosing an elective (see response).
Our work should be seen as an extension of the approach of Joshi, Webber, and Weischedel (1984a, 1984b; hereafter referred to as Joshi). Joshi's approach, however, involves only the stated and intended (or underlying) goal of the query, which, as the above example illustrates, can be inadequate for avoiding misleading responses. Further, a major claim of Joshi is that a system must recognize when a user's plan (as expressed in a query) is sub-optimal and provide a better alternative. However, Joshi leaves unspecified how this could be done.

We present an algorithm that produces good responses by abstractly reasoning about the overall goals and plans hypothesized of a user. An explicit model of the user is maintained to track the goals, plans, and preferences of the user and also to record some of the background of the user pertinent to the domain. Together these provide a more general, extended method of computing non-misleading responses. Along with new cases where a response must be modified to not be misleading, we show how the cases enumerated in (Joshi 1984a) can be effectively computed given the model of the user. We also show how the user model allows us to compare alternatives and select the better one, all with regards to a specific user, and how the algorithm allows the responses to be computed in a domain independent manner. In summary, computing a response requires, among other things, the ability to provide a correct, direct answer to a query; explain the failure of a query; compute better alternatives to a user's plan as expressed in a query; and recognize when a direct response should be modified and make the appropriate modification.

2. The User Model

Our model requires a database of domain dependent plans and goals. We assume that the goals of the user in the immediate discourse are available by methods such as specified in (Allen 1983; Carberry 1983; Litman and Allen 1984; Pollack 1984, 1986). The model of a user contains, in addition to the user's immediate discourse goals, his background, higher domain goals, and plans specifying how the higher domain goals will be accomplished. In the student-advisor domain, for example, the user model will initially contain some default goals that the user can be expected to hold, such as avoiding failing marks on his permanent record. It will also contain those goals of the user that can be inferred or known from the system's knowledge of the user's background, such as the attainment of a degree. New goals and plans will be added to the model (e.g. the student's preferences or intentions) as they are derived from the discourse. For example, if the user displays or mentions a predilection for numerical analysis courses this would be installed in the user model as a goal to be achieved.
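For concreteness, the contents of such a user model might be written as facts along the following lines. This is a minimal Prolog sketch; the predicate names and the particular student are invented for illustration and are not the representation of the implementation described below.

    % Hypothetical user model for one student in the advisor domain.
    background(user1, working_toward(cs_degree)).
    domain_goal(user1, get_degree).           % inferred from the background
    domain_goal(user1, avoid_failing_marks).  % default goal
    preference(user1, numerical_analysis).    % derived from the discourse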
If the user does not "live up" to these prin- ciples, the response generated by the algorithm will include how the principles are violated and also some alternatives that are better (if they exist) because they do not violate the principles. Some of these principles will be made explicit in the following description of the algorithm (see van Beek 1986 for a more complete description). The algorithm begins by checking whether the user's query (e.g. "Can I enroll in CS 375?") is possible or not possible (refer to figure 1). If the query is not possible, the user is informed and the explanation includes the reasons for the failure (step 1.0 of algorithm). Alterna- tive plans that are possible and help achieve the user's intended goal are searched for and presented to the user. But before presenting any alternative, the algo- rithm, to not mislead the user, ensures that the alterna- five is compatible with the higher domain goals of the user (step 1.1). If the query is possible, control passes to step 2.0, where the next step is to determine whether the stated goal does, as the user believes, help achieve the intended goal. Given that the user presents a plan that he believes will accomplish his intended goals, the sys- tem must check if the plan succeeds in its intentions (step 2.1 of algorithm). As is shown in the algorithm, if the relationship does not hold or the plan is not execut- able, the user should be informed. Here it is possible to provide additional unrequested information necessary to achieve the goal (cf. Allen 1983). In planning a response, the system should ensure that the current goals, as expressed in the user's queries, are compatible with the user's higher domain goals (step 2.2 in algorithm). For example, a plan that leads to the attainment of one goal may cause the non-attainment of another such as when a previously formed plan becomes invalid or a subgoal becomes impossible to achieve. A user may expect to be informed of such consequences, particularly if the goal that cannot now be attained is a goal the user values highly. The system can be additionally cooperative by sug- gesting better alternatives if they exist (step 2.3 in algo- rithm). Furthermore, both the definitions of better and possible alternatives are relative to a particular user. In particular, if a user has several compatible goals, he should adopt the plan that will contribute to the greatest number of his goals. As well, those goals that are valued absolutely higher than other goals, are the goals to be achieved. A user should seek plans of action that will satisfy those goals, and plans to satisfy his other goals should be adopted only if they are compatible with the satisfaction of those goals he values most highly. 216 (1.0) (1.1) (1.2) (2.0) (2.1) (2.2) (2.3) Check if original query is possible. Case 1: { Original query fails } Message: No, [query] is not possible because ... If ( 3 alternatives that help achieve the intended goal and are compatible with the higher domain goals ) then Message: However, you can [alternatives] Else Message: No alternatives Case 2: { Original query succeeds } Message: Yes, [query] is possible. If not ( intended goal ) then Message: Warn user that intended goal does not hold and explain why. 
4. An Example

Until now we have discussed a model for generating better, user-specific explanations. A test version of this model has been implemented in a student-advisor domain using Waterloo UNIX Prolog. Below we present an example to illustrate how the algorithm and the model of the user work together to produce these responses and to illustrate some of the details of the implementation.

Given a query by the user, the system determines whether the stated goal of the query is possible or not possible and whether the stated goal will help achieve the intended goal. In the hypothetical situation shown in figure 2, the stated goal of enrolling in CS572 is possible and the intended goal of taking a numerical analysis course is satisfied.1 The system then considers the background of the user (e.g. the courses taken), the background of the domain (e.g. what courses are offered) and a query from the user (e.g. "Can I enroll in CS572?"), and ensures that the goal of the query is compatible with the attainment of the overall domain goal. In this example, the user's stated goal of enrolling in a particular course is incompatible with the user's higher domain goal of achieving a degree because several preconditions fail. That is, given the background of the user the goal of the query to enroll in CS572 will not help achieve the domain goal. Knowledge of the incompatibility and the failed preconditions are used to form the first sentence of the system's response.

Scenario: The user asks about enrolling in a 500 level course. Only a certain number of 500 level courses can be credited towards a degree and the user has already taken that number of 500 level courses.

Stated goal:   Enroll in the course.
Intended goal: Take a numerical analysis course.
Domain goal:   Get a degree.

User: Can I enroll in CS 572 (Linear Algebra)?
System: Yes, but it will not get you further towards your degree since you have already met your 500 level requirement. Some useful courses would be CS 673 (Linear Programming) and CS 674 (Approximation).

Figure 2: Example from student advisor domain

To suggest better alternatives, the system goes into a planning stage. There is stored in the system a general plan for accomplishing the higher domain goal of the user.

1 Recall that we are assuming the stated and intended goals are supplied to our model. This particular intended goal, hypothetically inferred from the stated goal and previous discourse, was chosen to illustrate the use of the stated, intended, and domain goals in forming a best response. The case of a conflict between stated and intended goal would be handled in a similar fashion to the conflict between stated and domain goal, shown in this example.
This plan is necessarily incomplete and is used by the system to track the user by instantiating the plan according to the user's particular case. The system considers alternative plans to achieve the user's intended goal that are compatible with the domain goal. For this particular example, the system discovers other courses the user can add that will help achieve the higher goal.

To actually generate better alternatives and to check whether the user's stated goal is compatible with the user's domain goal, a module of the implemented system is a Horn clause theorem prover, built on top of Waterloo Unix Prolog, with the feature that it records a history of the deduction. The theorem prover generates possible alternative plans by performing deduction on the goal at the level of the user's query. That is, the goal is "proven" given the "actions" (e.g. enroll in a course) and the "constraints" (e.g. prerequisites of the course were taken) of the domain. In the example of figure 2, the expert system has the following Horn clauses in its knowledge base:

    course(cs673, numerical)
    course(cs674, numerical)

Figure 3 shows a portion of the simplified domain plan for getting a degree. Consider the first clause of the counts_for_credit predicate. This clause states that a course will count for credit if it is a 500 level course and fewer than two 500 level courses have already been counted for credit (since in our hypothetical world, at most two 500 level courses can be counted for credit towards a degree). The second clause is similar. It states the conditions under which a 600 level course can be counted for credit.

    get_degree(Student, Action) <-
        receive_credit(Student, Course, Action);
    get_degree(Student, []);

    receive_credit(Student, Course, Action) <-
        counts_for_credit(Student, Course),
        enrolled(Student, Course, credit, Action),
        do_work(Student, Course),
        passing_grade(Student, Course);

    receive_credit(Student, Course, Action) <-
        enrolled(Student, Course, credit, []),
        enrolled(Student, Course, incomplete, Action),
        complete_work(Student, Course),
        passing_grade(Student, Course);

    counts_for_credit(Student, Course) <-
        is_500_level(Course),
        500_level_taken(Student, N),
        lt(N, 2);

    counts_for_credit(Student, Course) <-
        is_600_level(Course),
        600_level_taken(Student, N),
        lt(N, 5);

Figure 3: Simplified domain plan for course domain.

The domain plan is then employed to generate an appropriate response. The clauses can be used in two ways: (i) to return an action that will help achieve a goal and (ii) to check whether a particular action is a possible step in a plan to achieve a goal. In the first use, the Action parameter is uninstantiated (a variable), the theorem prover is applied to the clause, and, as a result, the Action parameter is instantiated with an action the user could perform towards achieving his goal. In the second case, the Action parameter is bound to a particular action and then the theorem prover is applied. If the proof succeeds, the particular action is a valid step in a plan; if the proof fails, it is not valid and the history of the deduction will show why. In this example, enrolling in CS673 is a valid step in a plan for achieving a degree. Recall that the system will generate alternative plans even if the user's query is a valid plan in an attempt to find a better solution for the user. The (possibly) multiple alternative plans are then potential candidates for presenting to the user.
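The two uses of the domain plan can be demonstrated with a small self-contained fragment. The sketch below is in standard Prolog syntax (the paper's Waterloo UNIX Prolog writes ':-' as '<-' and uses lt/2); the facts for the student and the helper predicate level_taken/3 are invented here for illustration.

    is_500_level(cs572).
    is_600_level(cs673).
    level_taken(ariadne, 500, 2).   % already has two 500 level credits
    level_taken(ariadne, 600, 1).

    counts_for_credit(Student, Course) :-
        is_500_level(Course), level_taken(Student, 500, N), N < 2.
    counts_for_credit(Student, Course) :-
        is_600_level(Course), level_taken(Student, 600, N), N < 5.

    % (i) generation: leave the course unbound.
    %     ?- counts_for_credit(ariadne, Course).   % Course = cs673
    % (ii) checking: bind it to the course in the user's query.
    %     ?- counts_for_credit(ariadne, cs572).    % fails: 500 level limit reached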
These candidates are pruned by ranking them according to the heuristic of "which plan would get the user further towards his goals". Thus, the better alternatives are the ones that help satisfy multiple goals or multiple subgoals.2 One way in which the system can reduce alternatives is to employ previously derived goals of the user such as those that indicate certain preferences or interests. In the course domain, for instance, the user may prefer taking numerical analysis courses. For the example in figure 2, the suggested alternatives of CS673 and CS674 help towards the user's goal of getting a degree and the user's goal of taking numerical analysis courses and so are preferable.3

5. Joshi Revisited

The discussion in the previous section showed how our model can recognize when a user's plan is incompatible with his domain goals and present better alternative plans that are user-specific. Here we present examples of how our model can generate the responses enumerated by Joshi. The examples further illustrate how the addition of the user's overall goals allows us to compare and select better alternatives to a user's plan.

Figure 4 shows two different responses to the same question: "Can I drop CS 577?" The student asking the question is doing poorly in the course and wishes to drop it to avoid failing it. The goals of the query are passed to the Prolog implementation and the response generated depends on these goals, the information in the model of the user, and on external conditions such as deadlines for changing status in a course. For example purposes, the domain information is read in from a file (e.g. consult(example_1)). Figure 3 shows the clausal representation of the domain goals and plans used in this example (the representations for the goal of avoiding a failing mark are not shown but are similar).

    %
    % Can Ariadne drop CS 577?
    %
    ? consult(example_1);
    ? query(change_status(ariadne, 577, credit, nil),
            not_fail(ariadne, 577, Action));

    Yes, change_status(ariadne, 577, credit, nil) is possible.
    But, not_fail(ariadne, 577, _461) is not achieved since...
        is_failing(ariadne, 577)
    However, you can ...
        change_status(ariadne, 577, credit, incomplete)
    This will also help towards receive_credit

    %
    % Can Andrew drop CS 577?
    %
    ? consult(example_2);
    ? query(change_status(andrew, 577, credit, nil),
            not_fail(andrew, 577, Action));

    Yes, change_status(andrew, 577, credit, nil) is possible.
    But, there is a better way ...
        change_status(andrew, 577, credit, incomplete)
    Because this will also help towards receive_credit

Figure 4: Sample responses

Example 1: In this example, the stated goal is possible, but it fails in its intention (dropping the course doesn't enable the student to avoid failing the course). This is case 2.1 of the algorithm.

2 Part of our purpose is to characterize domain independent criteria for "betterness". Domain dependent knowledge could also be used to further reduce the alternatives displayed to the user. For example, in the course domain a rule of the form "A mandatory course is preferable to a non-mandatory course" may help eliminate presentation of certain options.

3 Note that in this example the user's intended goal also indicates a preference. Other user preferences may have been previously specified; these would be used to influence the response in a similar fashion.
The system now looks for alternatives that will help achieve the student's intended goal and determines that two alternative plans are possible: the student could either change to audit status or take an incomplete in the course. The plan to take an incomplete is presented to the user because it is considered the better of the two alternatives; it will allow the student to still achieve another of his goals: receiving credit for the course.

Example 2: Here the query is possible (the student can drop the course) and is successful in its intention (dropping the course does enable the student to avoid failing the course). The system now looks for a better alternative to the student's plan of dropping the course (case 2.3 of algorithm) and determines an alternative that achieves the intended goal of not failing the course but also achieves another of the student's domain goals: receiving credit for the course. This better alternative is then presented to the student.

6. Future Work and Conclusion

Future work should include incorporation of existing methods for inferring the user's goals from an utterance and also should include a component for mapping between the Horn clause representation used by the program and the English surface form.

An interesting next step would be to investigate combining the present work with methods for varying an explanation from an expert system according to the user's knowledge of the domain. In some domains it is desirable for an expert system to support explanations for users with widely diverse backgrounds. To provide this support an expert system should also tailor the content of its explanations according to the user's knowledge of the domain. An expert system currently being developed for the diagnosis of a child's learning disabilities and the recommendation of a remedial program provides a good example (Jones and Poole 1985). Psychologists, administrators, teachers, and parents are all potential audiences for explanations. As well, members within each of these groups will have varying levels of expertise in educational diagnosis. Cohen and Jones (1986; see also van Beek and Cohen) suggest that the user model begin with default assumptions based on the user's group and be updated as information is exchanged in the dialogue. In formulating a response, the system determines the information relevant to answering the query and includes that portion of the information believed to be outside of the user's knowledge.

We have argued that, in generating explanations, we can and should consider the user's goals, plans for achieving goals, and preferences among these goals and plans. Our implementation has supported the claim that this approach is useful in an expert advice-giving environment where the user and the system work cooperatively towards common goals through the dialogue and the user's utterances may be viewed as actions in plans for achieving those goals. We believe the present work is a small but nevertheless worthwhile step towards better and user-specific explanations from expert systems.

7. Acknowledgements

This paper is based on thesis work done under the supervision of Robin Cohen, to whom I offer my thanks for her guidance and encouragement. Financial support is acknowledged from the Natural Sciences and Engineering Research Council of Canada and the University of Waterloo.

8. References

Allen, J. F., 1983, "Recognizing Intentions from Natural Language Utterances," in Computational Models of Discourse, Ed. M. Brady and R. C.
Berwick, Cambridge: MIT Press.

Carberry, S., 1983, "Tracking User Goals in an Information-Seeking Environment," Proceedings of National Conference on Artificial Intelligence, Washington, D.C.

Cohen, P. R. and Levesque, H. J., 1985, "Speech Acts and Rationality," Proceedings of ACL-85, Chicago, Ill.

Cohen, R. and Jones, M., 1986, "Incorporating User Models into Expert Systems for Educational Diagnosis," Department of Computer Science Research Report CS-86-37, University of Waterloo, Waterloo, Ont.

Jones, M. and Poole, D., 1985, "An Expert System for Educational Diagnosis Based on Default Logic," Proceedings of the Fifth International Conference on Expert Systems and Their Applications, Avignon, France.

Joshi, A., Webber, B., and Weischedel, R., 1984a, "Living up to Expectations: Computing Expert Responses," Proceedings of AAAI-84, Austin, Tex.

Joshi, A., Webber, B., and Weischedel, R., 1984b, "Preventing False Inferences," Proceedings of COLING-84, 10th International Conference on Computational Linguistics, Stanford, Calif.

Litman, D. J. and Allen, J. F., 1984, "A Plan Recognition Model for Subdialogue in Conversations," University of Rochester Technical Report 141, Rochester, N.Y.

McKeown, K. R., Wish, M., and Matthews K., 1985, "Tailoring Explanations for the User," Proceedings of IJCAI-85, Los Angeles, Calif.

Pollack, M. E., 1984, "Good Answers to Bad Questions: Goal Inference in Expert Advice-Giving," Proceedings of CSCSI-84, London, Ont.

Pollack, M. E., 1986, "A Model of Plan Inference that Distinguishes Between the Beliefs of Actors and Observers," Proceedings of ACL-86, New York, N.Y.

van Beek, P., 1986, "A Model for User-Specific Explanations from Expert Systems," M. Math thesis, published as Department of Computer Science Research Report CS-86-42, University of Waterloo, Waterloo, Ont.

van Beek, P. and Cohen, R., 1986, "Towards User-Specific Explanations from Expert Systems," Proceedings of CSCSI-86, Montreal, Que.
Expressing Concern

Marc Luria
Computer Science Division, University of California at Berkeley, Berkeley, CA 94720, U.S.A.
Computer Science Department, Technion, Israel Institute of Technology, Haifa, Israel

Abstract

A consultant system's main task is to provide helpful advice to the user. Consultant systems should not only find solutions to user problems, but should also inform the user of potential problems with these solutions. Expressing such potential caveats is a difficult process due to the many potential plan failures for each particular plan in a particular planning situation. A commonsense planner, called KIP, Knowledge Intensive Planner, is described. KIP is the planner for the UNIX Consultant system. KIP detects potential plan failures using a new knowledge structure termed a concern. Concerns allow KIP to detect plan failures due to unsatisfied conditions or goal conflict. KIP's concern algorithm also is able to provide information to the expression mechanism regarding potential plan failures. Concern information is passed to the expression mechanism when KIP's selected plan might not work. In this case, KIP passes information regarding both the suggested plan and the potential caveats in that plan to the expression mechanism. This is an efficient approach since KIP must make such decisions in the context of its planning process. A concern's declarative structure makes it easier to express than procedural descriptions of plan failures used by earlier systems.

1. Introduction

The most important task of a consultant is to provide advice to a user. Human consultants are asked to provide answers to user queries in domains within which they have more expertise than the user. In some cases, the answers provided to the user are basic information about a particular domain. However, in many cases, the task of the consultant is to provide answers to user problems. Furthermore, they are not only asked to find solutions, they are also asked to use their expertise to anticipate potential problems with these solutions. Let us consider a very simple example of a consultant relationship. For example, suppose a child asks the following question:

(a) Where is my shoe?

His mother might respond:

(a1) It's in the basement.

However, his mother might also add:

(a2) Let me know if the door is locked.
(a3) Be careful walking down the stairs.
(a4) Make sure to turn off the basement light.

In (a1), the mother has provided the child with information about the location of his shoe. The mother has also implied the use of a plan: Walk down to the basement and get your shoes. However, there are a number of problems inherent in this plan. The mother might also inform her child of these problems. The first problem, (a2), is that one of the conditions necessary to execute the plan might be unsatisfied. The door to the basement might be locked. If it is locked additional steps in the plan will be necessary. The second problem, (a3), is that executing the walk-down-the-stairs plan might result in a fall. The mother knows that this outcome is likely, due to her experience of the child's previous attempts at the walk-down-the-stairs plan. The mother wishes to prevent the child from falling, since this is a potentially dangerous and frightening experience for the child. The third problem, (a4), is that the child might forget to turn off the light in the basement. This would threaten the mother's goal of preventing the basement light from burning out. However, the same parent might not add:
(a5) Let me know if the door needs to be oiled
(a6) Be careful walking in the basement
(a7) Make sure to close the basement door

This second set of responses also provides advice that reflects problems due to unsatisfied conditions of the plan or potential goal conflicts. However, the mother might not decide to express these statements to the child since they are either unlikely or unimportant causes of potential plan failure. Therefore, the mother has made three decisions. First, she has decided which plan to suggest to the child based on his world knowledge. Secondly, she has decided which parts of that plan should be expressed to the child. Thirdly, she has decided which potential caveats in that plan should be expressed to the child based on her experience.

Previous research in intelligent user interfaces (Allen84, Appelt85, McDonald84) has focused on the second decision. Systems attempt not to violate Grice's second Maxim of Quantity: Make your contribution as informative as is required (Grice 1975). These systems formulated a response that would provide information or a plan to the user. Allen sought to discover obstacles in the user's plan. He tried to help the user's plan by providing the user with the information he needed to execute that plan. However, he did not provide a mechanism for expressing plan failures. In this paper, we focus on the problem of making decisions regarding those potential problems which should be expressed to the user. However, rather than using a separate mechanism for this purpose, we propose that this decision be made using information provided by the problem solving component of the system, the planner.

We describe a commonsense planner called KIP, Knowledge Intensive Planner. KIP is being developed for UC, the UNIX Consultant system (Luria85, Wilensky 84a, 86). UC provides solutions to user problems in the UNIX operating system domain. KIP provides the information necessary in order to make decisions regarding which potential plan failures should be expressed to the user. KIP must make decisions regarding potential plan failures in order to devise a good plan for the user. Rather than use a separate process to make decisions about those potential plan failures which should be expressed to the user, KIP provides this information to the expression mechanism directly. In the next section, we give some examples of KIP's interaction with the expression mechanism. KIP provides information about potential plan failures which will be expressed to the user. We also describe KIP's role in the UNIX Consultant system. In the following section, we describe a declarative representation that allows KIP to detect and provide expression information about potential plan failures.

2. KIP Examples

(b) User: How do I print Jim's file on the lineprinter?
    UC: Print Jim's file foo by typing lpr foo. This plan will not work if you don't have read permission on Jim's file or if the printer is out of paper.

(c) User: How do I move a file named paul to the file named mary?
    UC: To move the file paul to the file named mary, type mv paul mary. However, if the file mary exists, it will be deleted.

(d) User: How do I change my password?
    UC: To change your password use the passwd command. However, remember that if you change your password on one machine, it will not be changed on other machines.

In each of these examples, KIP has selected a known plan for accomplishing the goals of the user.
However, in each of these examples, KIP determines that the plan could fail and therefore has decided to express this potential failure to the user.

KIP has a large knowledge-base of information about the UNIX operating system. Decisions regarding UC's own actions are made by UCEgo. The parser and goal analyzer (Mayfield 86) of UC pass KIP a set of goals, and KIP tries to find appropriate plans for those goals. KIP determines a plan for the problem, and notes which potential plan failures should be expressed to the user. KIP passes this decision-making information to the UCExpression mechanism (Chin86, Wilensky86). The expression mechanism decides how to express the plan to the user, given a model of the user's knowledge about UNIX. The plan is then passed to the natural language generator, which generates a natural language response to the user. UC is a conversational system, and if necessary KIP can query the user for more information. Nevertheless, KIP tries to provide the best plan it can with the information provided by the user.

3. Concerns

In the previous sections, we have described the importance of informing the user about potential problems with a plan. In this section, we describe a new concept which we have introduced, termed a concern. A concern allows KIP to predict potential plan failures and provide knowledge to express potential plan failures to the user. A concern refers to those aspects of a plan which should be considered because they are possible sources of plan failure. A concern describes which aspects of a plan are likely to cause failure.

There are two major types of concerns, condition concerns and goal conflict concerns. These two types reflect the two major types of plan failure. Condition concerns refer to those aspects of a plan that are likely to cause plan failure due to a condition of the plan that is needed for successful execution. The conditions about which KIP is concerned are always conditions of a particular plan. (These are fully described in Luria86, 87a.) Goal conflict concerns refer to those aspects of a plan which are likely to cause plan failure due to a potential goal conflict between an effect of a plan and a goal of the user. Goal conflict concerns relate plans to user goals and to other pieces of knowledge that are not part of the plan. Examples of this knowledge include background goals which may be threatened by the plan. Since these background goals are not usually inferred until such a threat is perceived, goal conflict concerns often refer to conflicts between a potential plan and a long-term interest of the user. Interests are general states that KIP assumes are important to the user. An interest differs from a goal in that one can have interests about general states of the world, while goals refer to a concrete state of the world. For example, preserving the contents of one's files is an interest, while preserving the contents of the file named file1 is a goal. KIP's knowledge-base includes many interests that KIP assumes on the part of the user. Goals are generated only when expressed by the user, or by KIP itself during the planning process.

Stored goal conflict concerns refer to concerns about conflicts of interest. These are concerns about the selected plan conflicting with an interest of the user. If KIP detects a conflict-of-interest concern, then KIP must determine if it should infer an individual goal on the part of the user that reflects this interest.
If KIP decides to infer this individual goal, then a dynamic concern between the selected plan and the individual goal is also instantiated. (Goal conflict concerns are described more fully in Luria87b.)

Some plan failures are more likely to occur than others, and some plan failures are more important than others if they do occur. The representation of concerns reflects this difference by assigning a varying degree of concern to the stored concerns in the knowledge base. The degree of a condition concern reflects both the likelihood that the condition will fail, and the importance of satisfying the condition for the successful execution of the plan. There are many factors that determine the degree of concern about a conflict-of-interest. The planning knowledge base designer needs to determine how likely a conflicting effect is to occur, how likely it is that the user holds the threatened goal, and how important this goal is to the user.

In the present implementation of KIP, information regarding concerns of potential plans is supplied by a human expert with a great deal of UNIX experience. Stored concerns are therefore a way for the planner database designer to express his personal experience regarding those aspects of a stored plan that are most likely to fail. In principle, however, the information might be supplied by an analysis of data of actual UNIX interactions.

4. Concerns and Expression

In this section, we describe the problems that concerns were initially meant to address in plan failure detection. We also describe how this same process has been used to express potential plan failures to the user.

KIP is a commonsense planner (Wilensky83) - a planner which is able to effectively use a large body of knowledge about a knowledge-rich domain. Such knowledge includes a general understanding of planning strategy, detailed descriptions of plans, the conditions necessary for these plans to execute successfully, and descriptions of those potential goal conflicts that the plans might cause. Due to the detailed nature of this knowledge, it is difficult to detect potential plan failures. Condition failures are hard to detect since there are many conditions for any particular plan. Goal conflict failures are difficult to detect since any of the many effects could conflict with any of the many goals of the user. Furthermore, many of the user goals are not inferred until a threat to user interest is perceived. Previous planning programs (Fikes71, Newell72, Sacerdoti74) searched exhaustively among every condition and every potential goal conflict for potential plan failure. This is a very inefficient process. On the other hand, human consultants generally consider only a few potential plan failures while assessing a particular plan.

Additionally, KIP may not be aware of the values of many of the conditions of a particular plan. Most previous planning research assumed that the values for all the conditions are known. However, in UC, when a user describes a planning problem which is then passed to KIP, the values for many conditions are usually left out. All users would believe that normal conditions, like the machine being up, would be assumed by the consultant. A naive user might not be aware of the value of many conditions that require a more sophisticated knowledge of UNIX. An expert user would believe that the consultant would make certain assumptions requiring this more sophisticated knowledge of UNIX.
It would be undesirable to prompt the user for this information, particularly for those values which are not important for the specific planning situation.

Therefore, concerns were introduced in order to detect plan failures. Concerns allow KIP to use information about the likelihood and importance of potential plan failures. They allow the planning database designer to store knowledge regarding which conditions are most likely to be unsatisfied, and which goal conflicts are most likely to occur as a result of the execution of a particular plan. Furthermore, the same concern information can be used in order to determine which potential plan failures should be expressed to the user. When KIP selects a potential plan, the concerns of that particular plan are evaluated in the particular planning situation. Once the concerns of a plan are evaluated there are three possible scenarios. In the first case, none of the concerns are important in the particular planning situation. The plan is generated to the user without any concern information. In the second case, there is a moderate degree of concern regarding the plan. In this case, the plan is generated along with the concern information. In cases where there is a high degree of concern, the plan is modified or a new plan is selected. These scenarios will be fully explained in the following section. Before describing KIP's algorithm regarding decisions about concerns, we first describe a simple example of the use of concerns.

5. An Example of the Use of Concerns

The simplest use of concerns addresses the problem of specifying which conditions of a particular plan are important enough to invoke the planner's concern. For example, suppose the user asks the following question:

(e) How do I print out the file named george on the laser printer?

KIP is passed the goal of printing the file named george on the laser printer. In this case, KIP's knowledge-base contains a stored plan for the goal of printing a file, namely, the USE-LSPR-COMMAND plan. KIP creates an instance of this plan, which it calls USE-LSPR-COMMAND1. KIP must then evaluate the USE-LSPR-COMMAND1 plan in order to determine if the plan is appropriate for this particular planning situation. This process entails the examination of those conditions likely to cause failure of this plan. In order to examine these conditions, KIP looks at the stored concerns of the stored plan, USE-LSPR-COMMAND. For each of the stored concerns of the stored plan, it creates a dynamic concern in this individual plan, USE-LSPR-COMMAND1. KIP examines the USE-LSPR-COMMAND plan, and finds that two of its many conditions are cause for concern:

(1) the printer has paper
(2) the printer is online

The most likely cause of plan failure involves (1), since the paper runs out quite often. Therefore, (1) has a moderate degree of concern, and (2) has a low degree of concern. KIP considers the most likely concerns first. These concerns are called stored condition concerns, because the failure of these conditions often causes the failure of USE-LSPR-COMMAND. KIP therefore creates dynamic concerns regarding the paper in the printer, and the printer being online. KIP then must evaluate each of these dynamic concerns. In this particular example, there is no explicit information about the paper in the printer or the printer being online. Therefore, KIP uses the default values for the concerns themselves.
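In a relational notation invented here for illustration (KIP's actual encoding is the KODIAK representation described in section 8), the two stored concerns of the plan might look like the following, with the degrees chosen arbitrarily from the one-to-ten scale used in the implementation:

    % stored_concern(Plan, Condition, DegreeOfConcern)
    stored_concern(use_lspr_command, printer_has_paper, 5).  % moderate
    stored_concern(use_lspr_command, printer_online,    2).  % low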
KIP's concern about paper in the printer is high enough to warrant further consideration. Therefore, this concern is temporarily overlooked. However, the concern about the printer being online is disregarded. Its degree of concern is low. It is not a very likely source of plan failure. Since there are no other dynamic concerns for this particular plan, KIP looks back at its overlooked concern. Since this is the only concern, and the degree of concern is moderate, KIP decides that this concern should not be elevated to a source of plan failure. Rather, KIP decides to express this concern to the user. KIP assumes that, except for this concern, the plan will execute successfully. The plan is then suggested to the user:

(E) UC: To print the file george on the laser printer, type lpr -Plp george. This plan will not work if the printer is out of paper.

There are many other conditions of the USE-LSPR-COMMAND plan that KIP might have considered. For example, the condition that the file exists is an important condition for the lpr command. However, KIP need not be concerned about this condition in most planning situations, since it is unlikely that this condition will cause plan failure. Hence such conditions are not stored in the long term memory of KIP as stored concerns.

6. KIP's Concern Treatment Algorithm

In the following section, we describe the part of KIP's algorithm that decides what to do with concerns once they have been evaluated. KIP's entire algorithm for determining the concerns of a particular plan is fully described in (Luria86) and (Luria87ab).

Once KIP has evaluated a particular dynamic concern of a particular plan, it can proceed in one of three ways, depending on the degree of that particular concern. If the degree of concern is low, KIP can choose to disregard the concern. Disregard means that the concern is no longer considered at all. KIP can try to modify other parts of the plan, and suggest the plan to the user with no reference to this particular concern. If the degree of concern is high, KIP can choose to elevate the concern to a source of plan failure. In this case, KIP determines that it is very likely that the plan will fail. KIP tries to fix this plan in order to change the value of this condition, or tries to find another plan. The most complex case is when the degree of concern is moderate. In this case, KIP can choose to disregard the concern, or elevate it to a source of plan failure. KIP can also choose to overlook the concern.

KIP then evaluates each of the concerns of a particular plan. It addresses all of the concerns which have been elevated to a source of plan failure. KIP thus develops a complete plan for the problem by satisfying conditions about which it was concerned, and resolving goal conflicts about which it was concerned. Once KIP has developed a complete plan, it is once again faced with the need to deal with the overlooked concerns. If the plan will work, except for the overlooked concerns, KIP can again choose to disregard the concern. If there are a number of overlooked concerns KIP may choose to elevate one or more of these overlooked concerns to a source of plan failure. The plan is then modified accordingly, or a new plan is selected. At this point, KIP can also choose to suggest an answer to the user. Any overlooked concerns are then expressed to the user in the answer.
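Schematically, the three-way decision might be rendered as follows. KIP itself is written in Zetalisp, so this Prolog fragment is only a sketch, and the two threshold values are invented for illustration; the paper says only that degrees run from one to ten.

    % Map a degree of concern (1-10) to one of the three treatments.
    treat(Degree, disregard) :- Degree =< 3.
    treat(Degree, overlook)  :- Degree > 3, Degree =< 7.
    treat(Degree, elevate)   :- Degree > 7.

    % e.g. ?- treat(5, T).   % T = overlook: held back, possibly expressed later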
Furthermore, if the concern has been elevated to a source of plan failure, and no other acceptable plan has been found, KIP can choose to suggest the faulty plan to the user, along with the potential caveats. The concern information is based on default knowledge that is assumed by KIP. Therefore, the plans may work if these defaults are not correct, even if there are concerns in the particular planning situation. Also, the user may decide that he is not concerned about a particular plan failure. For example, KIP may have told the user about a potential deleterious side effect. The user may decide that this side effect is not that important if it occurs. This corresponds to a human consultant who, when faced with a problem he cannot solve, gives the user a potentially faulty plan with an explanation of the potential caveats. This is more informative for the user than just saying that he doesn't know.

7. Advantages of Concerns

Thus, concerns are used by KIP to decide how the planning process should proceed and how to decide which answer is expressed. In this section, we describe a few more examples of KIP's behavior. In these examples, we also refer to a new type of concern called a violated default concern. These concerns are accessed by KIP whenever it realizes that a default has been violated. In this way, KIP can use knowledge from default concerns when there is no knowledge that defaults have been violated. However, when planning in novel situations, general violated default concerns are accessed. Consider the following examples:

(f) How do I edit the file anyfile?
(g) How do I edit Jim's file jimfile?
(h) How do I edit the file groupfile which is shared by my group?

One of KIP's main concerns in any of the possible editing plans is the write permission of the file. If the user tries to edit a file on which he does not have write permission, the plan will fail. In (f), this concern is inherited from the edit plan with a relatively low degree of concern. In the default case, the file belongs to the user and he has write permission on the file. Since there is no information about the write permission of the file, the default must be assumed and this concern is disregarded. KIP would therefore return a plan of:

(F) To edit the file named anyfile, use vi anyfile.

In (g), KIP realizes that the default of the file belonging to the user is violated. Due to this default violation, a violated default concern of having write permission on the file is created. This concern about write permission is then evaluated by the default mechanism. Since there is a very good chance that the plan will not work, this concern about write permission of the file is elevated to a source of plan failure. Once a condition is a source of plan failure, KIP must deal with the plan failure. KIP can suggest a plan for changing the condition or try some new plan. In this case, since there is no way to change the write permission of Jim's file, another plan is chosen:

(G) In order to edit Jim's file, copy the file to your directory and then use vi filename to edit the file.

In (h), KIP also realizes that the default of the file belonging to the user has been violated. However, the default value for write permission of this file is different because the file belongs to the user's group. There is a good chance that the user does have write permission on the file.
However, since there still is some chance that he does not have group write permission, there is still some concern about the condition. In this case, since the degree of concern is moderate, KIP can choose to overlook the concern, and suggest the plan to the user. However, the concern is still high enough that the answer expression mechanism (Luria82ab) might choose to express the concern to the user. The answer to (h) would therefore be:

(H) To edit the file groupfile, use vi groupfile. However, it might not work if you don't have write permission on this particular group file.

KIP can therefore use concerns to select a potential plan which has a moderate likelihood of success. KIP can express the plan and its reservations regarding the plan to the user. By temporarily overlooking a concern, KIP may search for other plan failures of a particular plan or for other potential plans. KIP can accomplish this without completely disregarding a concern or elevating the concern to a source of certain plan failure.

8. Implementation and Representation

KIP is implemented in Zetalisp on a Symbolics 3670. Concepts are represented in the KODIAK knowledge representation language (Wilensky84b). In particular, knowledge about UNIX commands has been organized in complex hierarchies using multiple inheritance. Therefore, when searching for stored default concerns of a particular plan that uses a particular UNIX command, KIP must search through a hierarchy of these commands. This is also true when looking for default violations. KIP searches up the hierarchy, and retrieves the stored concerns or default violations in this hierarchy.

Stored condition concerns are presently implemented by creating a different CONCERN concept for each concern. Also, a HAS-CONCERN relation is added between each concern and those conditions which are cause for concern. Degrees of concern are implemented by creating a HAS-CONCERN-LEVEL relation between the particular concern and the degree of concern. Degrees of concern are presently implemented as numbers from one to ten. Dynamic condition concerns are implemented as instances of these stored concerns.

Stored goal conflict concerns are presently implemented by creating a different CONCERN concept for each concern. Also, a 3-way HAS-CONCERN relation is created between each concern, the conflicting effect, and the threatened interest or goal which are cause for concern.

Defaults are implemented in the current version of KIP by attaching default values of conditions to the plans themselves. Context-dependent defaults are implemented by exploiting the concretion mechanism of UC, which tries to find the most specific concept in the hierarchy. Therefore, since KIP retrieves the most specific plan in the knowledge base, it automatically retrieves the most specific defaults.

Violated default concerns are implemented by creating a different VIOLATED-DEFAULT-CONCERN concept for each violated default concern. A HAS-VIOLATED-DEFAULT-CONCERN relation is added between the concern and the stored default which is violated. Therefore, when KIP has found the default that has been violated, it looks for the violated default concerns that are referenced by this default.

Particular concerns have been entered into the database of UNIX plans through a KODIAK knowledge representation acquisition language called DEFABS. These concerns are all based on my experience using UNIX and on discussions I have had with other UNIX users in our research group.
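The relational encoding just described can be pictured with a small sketch. Here plain Python tuples stand in for KODIAK concepts and relations; the names mirror those in the text (CONCERN, HAS-CONCERN, HAS-CONCERN-LEVEL), but the table-based lookup is our own simplification, not the KODIAK implementation.

    # Stand-in for a fragment of the KODIAK network: each relation instance
    # is a tuple (relation-name, concern, argument). Degrees run 1 to 10.
    relations = [
        ("HAS-CONCERN",       "PAPER-CONCERN",  "printer-has-paper"),
        ("HAS-CONCERN-LEVEL", "PAPER-CONCERN",  5),
        ("HAS-CONCERN",       "ONLINE-CONCERN", "printer-is-online"),
        ("HAS-CONCERN-LEVEL", "ONLINE-CONCERN", 2),
    ]

    def stored_concerns_for(conditions):
        """Collect (concern, condition, degree) triples whose condition
        belongs to the given plan; dynamic concerns would then be created
        as instances of these stored concerns."""
        cond = {c: x for r, c, x in relations if r == "HAS-CONCERN"}
        lvl = {c: x for r, c, x in relations if r == "HAS-CONCERN-LEVEL"}
        return [(c, cond[c], lvl[c]) for c in cond if cond[c] in conditions]

    print(stored_concerns_for({"printer-has-paper", "printer-is-online"}))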
We are currently investigating a way to enter this concern information using the UCTeacher program (Martin, 1985), a natural language knowledge acquisition system. Eventually, KIP may incorporate a learning component that would allow KIP to detect the frequency of certain plan failures and to store these as concerns.

9. Previous Research

9.1. Planning

Early planners such as STRIPS (Fikes71) did not address goal conflict detection as a separate problem. Conflicts were detected by the resolution theorem prover. The theorem prover compared a small set of add or delete formulas, and a small set of formulas that described the present state and the desired state of the world. If an action deleted the precondition of another action in the plan sequence, backtracking allowed the planner to determine another ordering of the plan steps. ABSTRIPS (Sacerdoti74) modified STRIPS to avoid these interacting subgoal problems by solving goals in a hierarchical fashion. Conflicts in ABSTRIPS were also noticed by the theorem prover. However, since the most important parts of the plan were solved first, they occurred less often and fewer paths were explored. Thus, both these programs identified a plan failure as a failed path in the search tree. Therefore, no information about the nature of a failed path could easily be extracted and expressed to a user of the planning system.

Sacerdoti's NOAH (Sacerdoti77) program separated the detection of conflicts from the rest of the planning process using his Resolve-Conflicts critic. This critic detects one particular kind of conflict, in which one action deletes the precondition of another action. We refer to this type of conflict as a deleted precondition plan conflict. The critic resolves the conflict by committing to an ordering of steps in which the action which requires the precondition is executed first. The ordering of steps is usually possible since NOAH uses a least commitment strategy for plan step ordering. By separating the detection of goal conflicts from the rest of the planning process, NOAH needs to search fewer plan paths than earlier planners.

In order to detect conflicts, NOAH computes a TOME, a table of multiple effects, each time a new action is added to the plan. This table includes all preconditions which are asserted or denied by more than one step in the current plan. Conflicts are recognized when a precondition for one step is denied in another step. In order to construct this table, NOAH must enter all the effects and preconditions for each of the steps in the plan every time a new step is added to the plan.

NOAH's separation of the goal conflict detection phase from the rest of the planning process was an important addition to planning research. However, NOAH's approach is problematic in a number of ways. First, it only detects conflicts that occur as a result of deleted preconditions. Other conflicts, such as conflicts between effects of a plan and other planner goals, cannot be detected using this method. Most of the examples in this paper are part of this category of conflict. If many planner goals were included in a TOME, as would be necessary in real world planning situations, this method would be computationally inefficient. Therefore, the same problems that were discussed earlier in regard to exhaustive search also apply to this method. A TOME is (1) computationally inefficient, (2) not cognitively valid, (3) unable to deal with default knowledge, and (4) assumes that all user goals are known, i.e., it
would have to evaluate every planner interest in a particular planning situation. Furthermore, information from a critic which is derived from a TOME is very difficult to express. The only thing that NOAH knows regarding a potential plan failure is that one step in a plan will delete the precondition of another step in the plan. A concern, on the other hand, is very easy to express to the user. Concerns connect the various objects that are affected by a plan failure. In addition, as in any part of the KODIAK knowledge base, additional expression information can be attached to the concern itself. This difference between a concern and a TOME is another example of the advantage of knowledge-rich declarative representations over procedural representations of knowledge.

9.2. Expression

As discussed earlier, work in intelligent user interfaces (Allen84, Appelt85, McDonald84) has primarily focused on decisions regarding what aspects of a plan should be expressed to the user. Expressing concerns about potential plan failures is a natural extension to these other user interfaces.

The texture of this work is very similar to work done earlier by the author. In earlier work on question answering in a text understanding system (Luria82ab), question answering was divided into two separate processes: one process determined what was contained in the answer, and the other determined how that information was expressed to the user. The first of our two processes determined which part of a causal chain was relevant for a particular answer. The second process determined which part of that causal chain should be generated into a natural language response for the user. This resulted in one relatively simple process that found the correct response, and another more general expression process termed answer expression.

In the present work, the process of expressing potential caveats in a plan was not divided into two new processes. Instead, this process is divided into the preexisting planning component and a more general expression mechanism. In so doing, we have improved the ability of the planning component to deal with potential plan failures.

10. References

Allen, J. 1984. Recognizing Intentions From Natural Language Utterances. In Michael Brady (ed.), Computational Models of Discourse. Cambridge, Mass.: MIT Press.

Appelt, D. 1982. Planning Natural Utterances to Satisfy Multiple Goals. SRI International AI Center Technical Note 259.

Chin, D. N. 1987. "KNOME: Modeling What the User Knows in UC." To appear in User Modelling in Dialog Systems, Springer-Verlag series on Symbolic Computation.

Ernst, G. and Newell, A. 1969. GPS: A Case Study in Generality and Problem Solving. New York: Academic Press.

Fikes, R. E., and Nilsson, N. J. 1971. STRIPS: A New Approach to the Application of Theorem Proving to Problem Solving. Artificial Intelligence, Vol. 2, No. 3-4, pp. 189-208.

Grice, H. P. Logic and Conversation. In P. Cole (ed.), Syntax and Semantics, Vol. 3: Speech Acts. New York: Academic Press, pp. 41-58.

Luria, M. 1982. "Question Answering: Two Separate Processes." Proceedings of the 4th National Conference of the Cognitive Science Society, Ann Arbor, MI, August 1982.

Luria, M. 1982. "Dividing up the Question Answering Process." Proceedings of the National Conference on Artificial Intelligence, Pittsburgh, PA, August 1982.

Luria, M. 1985. "Commonsense Planning in a Consultant System." Proceedings of the 9th Conference of the IEEE on Systems, Man, and Cybernetics, Tucson, AZ, November 1985.
Luria, M. "Concerns: How to Detect Plan Failures." Proceed- ings of the Third Annual Conference on Theoretical ls- sues in Conceptual Information Processing. Philadel- phia, PA. August, 1986. Luria, M. "Concerns: A Means of Identifying Potential Plan Failures." Proceedings of the Third IEEE Conference on Artificial Intelligence Applications. Orlando, Florida. February, 1987. Luria, M. "Goal Conflict Concerns" Proceedings of the Tenth International Joint Conference on Artificial Inteligence. Milan, Italy. August, 1987. McDonald, D. 1984. Natural Language Generation as a com- putational problem. In Michael Brady (ed.) Computa- tional Models of Discourse Cambridge, Mass; MIT Press. Martin, J., 1985. Knowledge Acquisition Through Natural Language Dialogue, Proceedings of the 2nd Conference on Artificial Intelligence Applications, Miami, Florida, 1985. Mayfield, J., 1986. When to Keep Thinking, Proceedings of the Third Annual Conference on Theoretical Issues in Conceptual Information Processing. Philadelphia, PA. 1986. Newell, A., and Simon, H. A. Human Problem Solving. Prentice-Hall, Englewood Cliffs, N. J. 1972. Sacerdoti, E., Planning in a Hierarchy of Abstraction Spaces, Artificial lnteUigence Vol. 5, pp. 115-135, 1974. Sacerdoti E. A Structure for Plans and Behavior Elsevier North-Holland, New York, N.Y. 1977. Wilensky, R. Planning and Understanding: A Computational Approach to Human Reasoning. Addison-Wesley, Read- ing, Mass., 1983. Wilensky, R., "KODIAK: A Knowledge Representation Language". Proceedings of the 6th National Conference of the Cognitive Science Society, Boulder, CO, June 1984. Wilensky, R., Arens, Y., and Chin, D. Talking to Unix in En- glish: An Overview of UC. Communications of the As- sociation for Computing Machinery, June, 1984. Wilensky, R., et. al., UC - A Progress Report. University of California, Berkeley, Electronic Research Laboratory Memorandum No. UCB/CSD 87/303. 1986. Sponsored by the Defense Advanced Research Projects Agency (DoD), Arpa Order No. 4871, monitored by Space and Naval Warfare Systems Command under Contract N00039-84-C-0089. 227
1987
31
INCORPORATING INHERITANCE AND FEATURE STRUCTURES INTO A LOGIC GRAMMAR FORMALISM

Harry H. Porter, III
Oregon Graduate Center
19600 N.W. Von Neumann Dr.
Beaverton, Oregon 97008-1999

ABSTRACT

Hassan Ait-Kaci introduced the ψ-term, an informational structure resembling feature-based functional structures but which also includes taxonomic inheritance (Ait-Kaci, 1984). We describe ψ-terms and how they have been incorporated into the Logic Grammar formalism. The result, which we call Inheritance Grammar, is a proper superset of DCG and includes many features of PATR-II. Its taxonomic reasoning facilitates semantic type-class reasoning during grammatical analysis.

INTRODUCTION

The Inheritance Grammar (IG) formalism is an extension of Hassan Ait-Kaci's work on ψ-terms (Ait-Kaci, 1984; Ait-Kaci and Nasr, 1986). A ψ-term is an informational structure similar to both the feature structure of PATR-II (Shieber, 1985; Shieber, et al., 1986) and the first-order term of logic. ψ-terms are ordered by subsumption and form a lattice in which unification of ψ-terms amounts to greatest lower bounds (GLB, ⊓). In Inheritance Grammar, ψ-terms are incorporated into a computational paradigm similar to the Definite Clause Grammar (DCG) formalism (Pereira and Warren, 1980). Unlike feature structures and first-order terms, the atomic symbols of ψ-terms are ordered in an IS-A taxonomy, a distinction that is useful in performing semantic type-class reasoning during grammatical analysis. We begin by discussing this ordering.

THE IS-A RELATION AMONG FEATURE VALUES

Like other grammar formalisms using feature-based functional structures, we will assume a fixed set of symbols called the signature. These symbols are atomic values used to represent lexical, syntactic and semantic categories and other feature values. In many formalisms (e.g. DCG and PATR-II), equality is the only operation for symbols; in IG, symbols are related in an IS-A hierarchy. These relationships are indicated in the grammar using statements such as:1

boy < masculineObject.
girl < feminineObject.
man < masculineObject.
woman < feminineObject.
{boy, girl} < child.
{man, woman} < adult.
{child, adult} < human.

1 Symbols appearing in the grammar but not in the IS-A statements are taken to be unrelated.

The symbol < can be read as "is a" and the notation {a1, ..., an} < b is an abbreviation for a1 < b, ..., an < b. The grammar writer need not distinguish between instances and classes, or between syntactic and semantic categories, when the hierarchy is specified. Such distinctions are only determined by how the symbols are used in the grammar. Note that this example ordering exhibits multiple inheritance: feminineObjects are not necessarily humans and humans are not necessarily feminineObjects, yet a girl is both a human and a feminineObject.

Computation of LUB (⊔) and GLB (⊓) in arbitrary partial orders is problematic. In IG, the grammar writer specifies an arbitrary ordering which the rule execution system automatically embeds in a lattice by the addition of newly created symbols (Maier, 1980). Symbols may be thought of as standing for conceptual sets or semantic types, and the IS-A relationship can be thought of as set inclusion. Finding the GLB (i.e., unification of symbols) then amounts to set intersection. For the partial order specified above, two new symbols are automatically added, representing semantic categories implied by the IS-A statements, i.e. human females and human males.
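The set-inclusion reading of the signature can be illustrated with a short sketch. Modelling each symbol by the set of minimal symbols below it is our own simplification of the lattice construction; the symbol names are those of the example above.

    # Each symbol is modelled by its set of minimal subtypes, so GLB
    # (unification of symbols) is set intersection, as described above.
    ext = {s: {s} for s in ("boy", "girl", "man", "woman")}
    ext.update({
        "masculineObject": {"boy", "man"},
        "feminineObject":  {"girl", "woman"},
        "child":           {"boy", "girl"},
        "adult":           {"man", "woman"},
        "human":           {"boy", "girl", "man", "woman"},
    })

    def glb(a, b):
        """Greatest lower bound; the empty set plays the role of bottom
        (unification failure)."""
        return ext[a] & ext[b]

    print(glb("human", "feminineObject"))  # {'girl', 'woman'}: humanFemale
    print(glb("child", "adult"))           # set(): no common subtype

Note that glb("human", "feminineObject") yields exactly the extension of the automatically added humanFemale category.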
The first new category (human females) can be thought of as the intersection of human and feminineObject, or as the union of girl and woman,2 and similarly for human males. The signature resulting from the IS-A statements is shown in Figure 1.

2 Or anything in between. One is the most liberal interpretation, the other the most conservative. The signature could be extended by adding both classes, and any number in between.

[Figure 1. A signature: the lattice over feminineObject, masculineObject, human, adult, child, and the added humanFemale and humanMale categories, with woman, man, girl, and boy below them.]

ψ-TERMS AS FEATURE STRUCTURES

Much work in computational linguistics is focussed around the application of unification to an informational structure that maps attribute names (also called feature names, slot names, or labels) to values (Kay, 1984a; Kay, 1984b; Shieber, 1985; Shieber, et al., 1986). A value is either atomic or (recursively) another such mapping. These mappings are called by various names: feature structures, functional structures, f-structures, and feature matrices. The feature structures of PATR-II are most easily understood by viewing them as directed, acyclic graphs (DAGs) whose arcs are annotated with feature labels and whose leaves are annotated with atomic feature values (Shieber, 1985).

IGs use ψ-terms, an informational structure that is best described as a rooted, possibly cyclic, directed graph. Each node (both leaf and interior) is annotated with a symbol from the signature. Each arc of the graph is labelled with a feature label (an attribute). The set of feature labels is unordered and is distinct from the signature. The formal definition of ψ-terms, given in set-theoretic terms, is complicated in several ways beyond the scope of this presentation; see the definition of well-formed types in (Ait-Kaci, 1984). We give several examples to convey the flavor of ψ-terms.

Feature structures are often represented using a bracketed matrix notation, in addition to the DAG notation. ψ-terms, on the other hand, are represented using a textual notation similar to that of first-order terms. The syntax of the textual representation is given by the following extended BNF grammar:3

term ::= symbol [ featureList ] | featureList
featureList ::= ( feature , feature , ... , feature )
feature ::= label => term | label => variable [ : term ]

3 The vertical bar separates alternate constituents, brackets enclose optional constituents, and ellipses are used (loosely) to indicate repetition. The characters ( ) => , and : are terminals.

Our first example contains the symbols np, singular, and third. The label of the root node, np, is called the head symbol. This ψ-term contains two features, labelled by number and person.

np ( number => singular,
     person => third )

The next example includes a subterm at agreement =>:

( cat => np,
  agreement => ( number => singular,
                 person => third ) )

In this ψ-term the head symbol is missing, as is the head symbol of the subterm. When a symbol is missing, the most general symbol of the signature (⊤) is implied.

In traditional first-order terms, a variable serves two purposes. First, as a wild card, it serves as a place holder which will match any term. Second, as a tag, one variable can constrain several positions in the term to be filled by the same structure. In ψ-terms, the wild card function is filled by the maximal symbol of the signature (⊤), which will match any ψ-term during unification.
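As a concrete illustration of this textual notation, the following sketch encodes a ψ-term as a head symbol plus a label-to-subterm map. The Python class is purely expository and omits coreference tags (variables), which are discussed next; Porter's implementation is in Smalltalk, not Python.

    class PsiTerm:
        """Expository encoding: head symbol plus feature map; an omitted
        head defaults to the top symbol of the signature."""
        def __init__(self, head="TOP", **features):
            self.head = head
            self.features = features      # label -> PsiTerm

        def __repr__(self):
            if not self.features:
                return self.head
            body = ", ".join(f"{l} => {t!r}" for l, t in self.features.items())
            return f"{self.head}({body})"

    # np(number => singular, person => third)
    t = PsiTerm("np", number=PsiTerm("singular"), person=PsiTerm("third"))
    print(t)   # np(number => singular, person => third)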
Variables are used exclusively for the tagging function, to indicate ψ-term coreference. By convention, variables always begin with an uppercase letter, while symbols and labels begin with lowercase letters and digits. In the following ψ-term, representing The man wants to dance with Mary, X is a variable used to identify the subject of wants with the subject of dance.

sentence ( subject => X: man,
           predicate => wants,
           verbComp => clause ( subject => X,
                                predicate => dance,
                                object => mary ))

If a variable X appears in a term tagging a subterm t, then all subterms tagged by other occurrences of X must be consistent with (i.e. unify with) t.4 If a variable appears without a subterm following it, the term consisting of simply the top symbol (⊤) is assumed. The constraint implied by variable coreference is not just equality of structure but equality of reference. Further unifications that add information to one sub-structure will necessarily add it to the other. Thus, in this example, X constrains the terms appearing at the paths subject => and verbComp => subject => to be the same term.

4 Normally, the subterm at X will be written following the first occurrence of X, and all other occurrences of X will not include subterms.

In the ψ-term representation of the sentence The man with the toupee sneezed, shown below, the np filling the subject role, X, has two attributes. One is a qualifier filled by a relativeClause whose subject is X itself.

sentence ( subject => X: np ( head => man,
                              qualifier => relativeClause ( subject => X,
                                                            predicate => wear,
                                                            object => toupee )),
           predicate => sneezed )

As the graphical representation (in Figure 2) of this term clearly shows, this ψ-term is cyclic.

UNIFICATION OF ψ-TERMS

The unification of two ψ-terms is similar to the unification of two feature structures in PATR-II or two first-order terms in logic. Unification of two terms t1 and t2 proceeds as follows. First, the head symbols of t1 and t2 are unified. That is, the GLB of the two symbols in the signature lattice becomes the head symbol of the result. Second, the subterms of t1 and t2 are unified. When t1 and t2 both contain the feature f, the corresponding subterms are unified and added as feature f of the result. If one term, say t1, contains feature f and the other term does not, then the result will contain feature f with the value from t1. This is the same result that would obtain if t2 contained feature f with value ⊤. Finally, the subterm coreference constraints implied by the variables in t1 and t2 are respected. That is, the result is the least constrained ψ-term such that if two paths (addresses) in t1 (or t2) are tagged by the same variable (i.e. they corefer), then they will corefer in the result. For example, when the ψ-term

( agreement => X: ( number => singular ),
  subject => ( agreement => X ))

is unified with

( subject => ( agreement => ( person => third )))

the result is

( agreement => X: ( number => singular,
                    person => third ),
  subject => ( agreement => X ))

INHERITANCE GRAMMARS

An IG consists of several IS-A statements and several grammar rules. A grammar rule is a definite clause which uses ψ-terms in place of the first-order literals used in first-order logic programming.5 Much of the notation of Prolog and DCGs is used. In particular, the :- symbol separates a rule head from the ψ-terms comprising the rule body. Analogously to Prolog, list-notation (using [, |, and ]) can be used as a shorthand for ψ-terms representing lists and containing head and tail features.
When the --> symbol is used instead of :-, the rule is treated as a context-free grammar rule and the interpreter automatically appends two additional arguments (start and end) to facilitate parsing. The final syntactic sugar allows feature labels to be elided; sequentially numbered numeric labels are automatically supplied. Our first simple Inheritance Grammar consists of the rules:

sent --> noun(Num), verb(Num).
noun(plural) --> [cats].
verb(plural) --> [meow].

The sentence to be parsed is supplied as a goal clause, as in:

:- sent([cats,meow], []).

5 This is to be contrasted with LOGIN, in which ψ-terms replace first-order terms rather than predications.

[Figure 2. Graphical representation of a ψ-term.]

The interpreter first translates these clauses into the following equivalent IG clauses, expanding away the notational sugar, before execution begins.

sent(start => P1, end => P3) :-
    noun(1 => Num, start => P1, end => P2),
    verb(1 => Num, start => P2, end => P3).
noun(1 => plural, start => list(head => cats, tail => L), end => L).
verb(1 => plural, start => list(head => meow, tail => L), end => L).
:- sent(start => list(head => cats,
                      tail => list(head => meow, tail => nil)),
        end => nil).

As this example indicates, every DCG is an Inheritance Grammar. However, since the arguments may be arbitrary ψ-terms, IG can also accommodate feature structure manipulation.

TYPE-CLASS REASONING IN PARSING

Several logic-based grammars have used semantic categorization of verb arguments to disambiguate word senses and fill case slots (e.g. Dahl, 1979; Dahl, 1981; McCord, 1980). The primary motivation for using ψ-terms for grammatical analysis is to facilitate such semantic type-class reasoning during the parsing stage. As an example, the DCG presented in (McCord, 1980) uses unification to do taxonomic reasoning. Two types unify iff one is a subtype of the other; the result is the most specific type. For example, if the first-order term smith:_, representing an untyped individual,6 is unified with the type expression X:person:student, representing the student subtype of person, the result is smith:person:student.

6 Here the colon is used as a right-associative infix operator meaning subtype.

While this grammar achieves extensive coverage, we perceive two shortcomings to the approach: (1) the semantic hierarchy is somewhat inflexible because it is distributed throughout the lexicon, rather than being maintained separately; (2) multiple inheritance is not accommodated (although see McCord, 1985). In IG, the ψ-term student can act as a typed variable and unifies with the ψ-term smith (yielding smith), assuming the presence of IS-A statements such as:

student < person.
{smith, jones, brown} < student.

The taxonomy is specified separately (even with the potential of dynamic modification) and multiple inheritance is accommodated naturally.

OTHER GRAMMATICAL APPLICATIONS OF TAXONOMIC REASONING

The taxonomic reasoning mechanism of IG has applications in lexical and syntactic categorization as well as in semantic type-class reasoning. As an illustration which uses ψ-term predications, consider the problem of writing a grammar that accepts a prepositional phrase or a relative clause after a noun phrase, but only accepts a prepositional phrase after the verb phrase. So The flower under the tree wilted, The flower that was under the tree wilted, and John ate under the tree should be accepted, but not *John ate that was under the tree.
The taxonomy specifies that prepositionalPhrase and relativeClause are npModifiers, but only a prepositionalPhrase is a vpModifier. The following highly abbreviated IG shows one simple solution:

{prepositionalPhrase, relativeClause} < npModifier.
prepositionalPhrase < vpModifier.

sent(...) --> np(...), vp(...), vpModifier(...).
np(...) --> np(...), npModifier(...).
np(...) --> ...
vp(...) --> ...
prepositionalPhrase(...) --> ...
relativeClause(...) --> ...

IMPLEMENTATION

We have implemented an IG development environment in Smalltalk on the Tektronix 4406. The IS-A statements are handled by an ordering package which dynamically performs the lattice extension and which allows interactive display of the ordering. Many of the techniques used in standard depth-first Prolog execution have been carried over to IG execution. To speed grammar execution, our system precompiles the grammar rules. To speed grammar development, incremental compilation allows individual rules to be compiled when modified. We are currently developing a large grammar using this environment.

As in Prolog, top-down evaluation is not complete. Earley Deduction (Pereira and Warren, 1980; Porter, 1986), a sound and complete evaluation strategy for logic programs, frees the writer of DCGs from the worry of infinite left-recursion. Earley Deduction is essentially a generalized form of chart parsing (Kaplan, 1973; Winograd, 1983), applicable to DCGs. We are investigating the application of alternative execution strategies, such as Earley Deduction and Extension Tables (Dietrich and Warren, 1986), to the execution of IGs.

ACKNOWLEDGEMENTS

Valuable interactions with the following people are gratefully acknowledged: Hassan Ait-Kaci, David Maier, David S. Warren, Fernando Pereira, and Lauri Karttunen.

REFERENCES

Ait-Kaci, Hassan. 1984. A Lattice Theoretic Approach to Computation Based on a Calculus of Partially Ordered Type Structures, Ph.D. Dissertation, University of Pennsylvania, Philadelphia, PA.

Ait-Kaci, Hassan and Nasr, Roger. 1986. LOGIN: A Logic Programming Language with Built-in Inheritance, Journal of Logic Programming, 3(3):185-216.

Dahl, Veronica. 1979. Logical Design of Deductive NL Consultable Data Bases, Proc. 5th Intl. Conf. on Very Large Data Bases, Rio de Janeiro.

Dahl, Veronica. 1981. Translating Spanish into Logic through Logic, Am. Journal of Comp. Linguistics, 7(3):149-164.

Dietrich, Susan Wagner and Warren, David S. 1986. Extension Tables: Memo Relations in Logic Programming, Technical Report 86/18, C.S. Dept., SUNY, Stony Brook, New York.

Kaplan, Ronald. 1973. A General Syntactic Processor, in: Randall Rustin, Ed., Natural Language Processing, Algorithmics Press, New York, NY.

Kay, Martin. 1984a. Functional Unification Grammar: A Formalism for Machine Translation, Proc. 22nd Ann. Meeting of the Assoc. for Computational Linguistics (COLING), Stanford University, Palo Alto, CA.

Kay, Martin. 1984b. Unification in Grammar, Natural Lang. Understanding and Logic Programming Conf. Proceedings, IRISA-INRIA, Rennes, France.

Maier, David. 1980. DAGs as Lattices: Extended Abstract, Unpublished manuscript.

McCord, Michael C. 1980. Using Slots and Modifiers in Logic Grammars for Natural Language, Artificial Intelligence, 18(3):327-368.

McCord, Michael C. 1985. Modular Logic Grammars, Proc. of the 23rd ACL Conference, Chicago, IL.

Pereira, F.C.N. and Warren, D.H.D. 1980.
Definite Clause Grammars for Language Analysis - A Survey of the Formalism and a Comparison with Augmented Transition Networks, Artificial Intelligence, 13:231-278.

Pereira, F.C.N. and Warren, D.H.D. 1983. Parsing as Deduction, 21st Annual Meeting of the Assoc. for Computational Linguistics, Boston, MA.

Porter, Harry H. 1986. Earley Deduction, Technical Report CS/E-86-002, Oregon Graduate Center, Beaverton, OR.

Shieber, Stuart M. 1985. An Introduction to Unification-Based Approaches to Grammar, Tutorial Session Notes, 23rd Annual Meeting of the Assoc. for Computational Linguistics, Chicago, IL.

Shieber, S.M., Pereira, F.C.N., Karttunen, L. and Kay, M. 1986. A Compilation of Papers on Unification-Based Grammar Formalisms, Parts I and II, Center for the Study of Language and Information, Stanford.

Winograd, Terry. 1983. Language as a Cognitive Process, Vol. 1: Syntax, Addison-Wesley, Reading, MA.
1987
32
A Unification Method for Disjunctive Feature Descriptions

Robert T. Kasper
USC/Information Sciences Institute
4676 Admiralty Way, Suite 1001
Marina del Rey, CA 90292
and
Electrical Engineering and Computer Science Department
University of Michigan

Abstract

Although disjunction has been used in several unification-based grammar formalisms, existing methods of unification have been unsatisfactory for descriptions containing large quantities of disjunction, because they require exponential time. This paper describes a method of unification by successive approximation, resulting in better average performance.

1 Introduction

Disjunction has been used in several unification-based grammar formalisms to represent alternative structures in descriptions of constituents. Disjunction is an essential component of grammatical descriptions in Kay's Functional Unification Grammar [6], and it has been proposed by Karttunen as a linguistically motivated extension to PATR-II [2].

In previous work two methods have been used to handle disjunctive descriptions in parsing and other computational applications. The first method requires expanding descriptions to disjunctive normal form (DNF) so that the entire description can be interpreted as a set of structures, each of which contains no disjunction. This method is exemplified by Definite Clause Grammar [8], which eliminates disjunctive terms by expanding each rule containing disjunction into alternative rules. It is also the method used by Kay [7] in parsing FUG. This method works reasonably well for small grammars, but it is clearly unsatisfactory for descriptions containing more than a small number of disjunctions, because the DNF expansion requires an amount of space which is exponential in the number of disjunctions. The second method, developed by Karttunen [2], uses constraints on disjuncts which must be checked whenever a disjunct is modified. Karttunen's method is only applicable to value disjunctions (i.e. those disjunctions used to specify the value of a single feature), and it becomes complicated and inefficient when disjuncts contain non-local dependencies (i.e. values specified by path expressions denoting another feature).

In previous research [4,5] we have shown how descriptions of feature structures can be represented by a certain type of logical formula, and that the consistency problem for disjunctive descriptions is NP-complete. This result indicates, according to the widely accepted mathematical assumption that P ≠ NP, that any complete unification algorithm for disjunctive descriptions will require exponential time in the worst case. However, this result does not preclude algorithms with better average performance, such as the method described in the remainder of this paper. This method overcomes the shortcomings of previously existing methods, and has the following desirable properties:

1. It applies to descriptions containing general disjunction and non-local path expressions;
2. It delays expansion to DNF;
3. It can take advantage of fast unification algorithms for non-disjunctive directed graph structures.

2 Data Structures

The most common unification methods for non-disjunctive feature structures use a directed graph (DG) representation, in which arcs are labeled by names of features, and nodes correspond to values of features. For an introduction to these methods, the reader is referred to Shieber's survey [11]. In the remainder of this section we will define a data structure for disjunctive descriptions, using DG structures as a basic component.

In the following exposition, we will carefully observe the distinction between feature structures and their descriptions, as explained in [4]. Feature structures will be represented by DGs, and descriptions of feature structures will be represented by logical formulas of the type described in [4].
In the remainder of this section we will define a data structure for disjunctive descriptions, using DG structures as a basic component. In the following exposition, we will carefully observe the distinction between feature structures and their descriptions, as explained in [4]. Feature structures will be represented by DGs, and descriptions of feature structures will be repre- sented by logical formulas of the type described in [4 I. The 235 NIL TOP ~< Px >,...,< P,~ >! ~^¢ ,/, V ¢, denoting no information; denoting inco~istent information; where a E A, to describe atomic values; where l E L and ~ E FDL, to describe structures in which the feature labeled by I has a value described by ~; where each pC E L °, to describe an equivalence class of paths sharing a common value in a feature structure; where @, ¢ E FDL; where @, ¢ E FDL. Figure I: Syntax of FDL Formulas. syntax for formulas of this feature description logic (hereafter called FDL) is given in Figure 1. I Note, in particular, that disjunction is used in descriptions of feature structures, but not in the structures themselves. As we have shown (see [9]) that there is a unique minimal satisfying DG structure for any nondisjunctive FDL formula, we can represent the parts of a formula which do not contain any disjunction by DGs. DGs are a more compact way of representing the same information that is contained in a FDL formula, provided the formula contains no disjunction. Let us define an unconditional conjunct to be a conjunct of a formula which contains no occurrences of disjunction. After path expansion any formula can be put into the form: uco~j ^ disj~ A... A disy,,, where uconj contains no occurrences of disjunction, and each disj¢, for 1 ~ i ~ m, is a disjunction of two or more alter- natives. The ,~conj part of the formula is formed by using the commutative law to bring all unconditional conjuncts of the formula together at the front. Of course, there may be no unconditional conjuncts in a formula, in which case ucoaj would be the formula NIL. Each disjunct may be any type of formula, so disjuncts can also be put into a similar form, with aLl unconditional con- juncts grouped together before all disjunctive components. Thus the disjunctions of a formula can be put into the form (~conj~ ^disA ~ ^...^disA,) v... v (uconj,, ^disj,, ~ ^...^ dlsj,, ). The embedding of conjuncts within disjuncts is preserved, but the order of conjuncts may be changed. The unconditional conjuncts of a formula contain informa- tion that is more definite than the information contained in disjunctions. Thus a formula can be regarded as having a definite part, containing only unconditional conjuncts, and an indefinite part, containing a set of disjunctions. The def- inite part contains no disjunction , and therefore it may be represented by a DG structure. To encode these parts of a formula, let us define a feature-description as a type of data structure, having two components: ILet A and L be sets of symbols which are used to denote atomic values and feature labels, respectively. Figure 2: AND/OR graph representation of a feature description. defl,n|te: a DG structure; indefinite: a SET of disjunctions, where each disjunction is a SET of feature.descriptlon.s. It is poesibh to convert any FDL formula into a feature- description structure by a simple automatic procedure, a.s described in [5]. This conversion does not add or subtract any information from a formula, nor increase its size in any sig- nificant way. 
It simply identifies components of the formula which may be converted into a more efficient representation as DG structures.

A feature-description is conceptually equivalent to a special kind of AND/OR graph, in which the terminal nodes are represented by DG structures. For example, an AND/OR graph equivalent to the formula

φ0 ∧ (φ1 ∨ φ2) ∧ (φ3 ∨ φ4 ∨ (φ5 ∧ (φ6 ∨ φ7)))

is shown in Figure 2. In the AND/OR graph representation, each AND-node represents a feature-description. The first outgoing arc from an AND-node represents the definite component of a feature-description, and the remaining outgoing arcs represent the indefinite component. Each OR-node represents a disjunction.
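The data structure just described can be sketched directly. The dict-based DG stand-in and class layout below are illustrative assumptions; the actual implementation reuses the PATR-II structure-building module (see Section 6).

    from dataclasses import dataclass, field

    @dataclass
    class FeatureDescription:
        """definite: a DG (here approximated by a flat dict);
        indefinite: a set of disjunctions, each a set (here a list) of
        embedded FeatureDescriptions, mirroring the AND/OR graph of
        Figure 2."""
        definite: dict = field(default_factory=dict)
        indefinite: list = field(default_factory=list)

    # phi0 AND (phi1 OR phi2): one unconditional conjunct, one disjunction.
    desc = FeatureDescription(
        definite={"cat": "np"},
        indefinite=[[FeatureDescription({"number": "sing"}),
                     FeatureDescription({"number": "pl"})]],
    )
    print(desc)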
When the parameter r~ to NWISE,-CONSISTENCY has the value 1, then one disjunct is checked for compatibility with all other disjunctions of the description in a pairwise manner. The pairwise manner of checking compatibility can be generalized to groups of any size by increasing the value of the parameter n. While this third step of the algorithm is necessary in or- der to insure consistency of disjunctive descriptions, it is not necessary to use it every time a description is built during a parse. In practice, we find that the performance of the algo- rithm can be tuned by using this step only at strategic points during a parse, since it is the most inefficient step of the al- 237 Function CHECK-INDEF (desc, cond) Returns feature-description: where deac is a feature-description, and cond is a DG. Let indef = desc.indeflnite (a set of disjunctions). Let new-def = desc.deflnite (a DG). Let unchecked-parts ~ true. While unchecked-parts, begin; unchecked-parts := false. Let new-indef = ~. For each disjunetiort in indef." Let compatible-disjuncts = CHECK-DISJ (disjunction, cond). If cardinality of compatible-disjuncts is: 0 : Return (failure); 1 : Let disjunct ---- single element of compatible-disjuncts. new-def :--- UNIFY-DGS (new-def, disjunct.deflnite). newoindef := new-indef tJ disjunct.indeflnite. unchecked-parts := true; otherwise : new-indef :ffi newoindef U {compatible-disjuncta}. Prepare to check remaining disjunctions for compatibility with new-def. cond := new-def. indef :~ new-indef. end (while loop). Let newodesc ~= make feature-description with: new-desc.deflnite ---- new-def, new.desc.indeflnite ---- new-indef. Return (new-desc). Figure 4: Algorithm to check compatibility of indefinite parts of feature-descriptions with respect to a condition DG. Function CHECK-DISJ (disj, cord) Return8 disjunction: where disj is a disjunction of feature-descriptions, and cond is a DG. Let new-disj = 0 (a set of feature-descriptions). For each disjunct in disj: If DGS-COMPATIBLE? (cond, disjunct.definite), Then if disjunct.indeflnite = $, Then new-disj := new-disj t9 {disjunct}; Else begin; Let new-disjunct : CHECK-INDEF (disjunct, cond). If new-disjunct ~ failure, then begin; new-disj := new-disj t9 {new-dlsjunct}. end. end. Return (new-disj). Figure 5: Algorithm to check compatibility of disjunctions with respect to a condition DG. 238 Funetlon NWISE-CONSISTENCY (desc, n) Returns feature-description: where desc is a feature-description. If number of disjunctions in desc.indefinite _< n, Then Return (desc). Let def = desc.definite. Let indef = desc.indefinite. Let new-indef = ~. While disiunctions remain in indef: Let disiunction ---- remove one disjunction from indef. Let new-disj = ~. For each dlsjuTtct in disjunction: Let disjunct-def ---- UNIFY-DGS (def, disiunct.definite). Let disjunct-indef ---- disjunet.indefinite U indef U new-indef. Let hyp-desc = make feature-description with: hyp-desc.definite = disjunct-def, hyp-desc.indefinite ---- disiunet-indef. If n = 1, Then let new-desc = CHECK-INDEF (hyp-desc, disjunct-def). Else let new-desc = NWISE-CONSISTENCY (hyp-desc, n-l). If new-desc ~ failure, Then new-disj := new-disj tJ (new-desc}. If cardinality of new-disj is: O: Return (failure); 1: Let new-desc = single element of new-disj. def := new-desc.definite. indef := new-dese.indefinite. new-indef := ¢; otherwise: (keep this disjunction in result) new-indef := new-indef U {new-disj}. 
Let result-desc = make feature-description with: result-desc.definite = def, result-desc.indefinite = new-indef. Return (result-desc). Figure 6: Algorithm to check compatibility of disjunctions of a description by checking groups of n disjunctions. gorithm. In our application, using the Earley chart parsing method, it has proved best to use NWISE-CONSISTENCY only when building descriptions for complete edges, but not when building descriptions for active edges. Note that two feature-descriptions do not become perma- nently linked when they are unified, unlike unification for DG stuctures. The result of unifying two descriptions is a new description, which is satisfied by the intersection of the sets of structures that satisfy the two given descriptions. The new descriptlon contains all the information that is contained in either of the given descriptions, subtracting any disjuncts which are no longer compatible. 4 An example In order to illustrate the effect of each step of the algo- rithm, let us consider an example of unifying the descrip- tion of a known constituent with the description of a por- tion of a grammar. This exemplifies the predominant type of structure building operation needed in a parsing program for Functional Unification Grammar. The example given here is deliberately simple, in order to illustrate how the algorithm works with a minimum amount of detail. It is not intended as an example of a linguistically motivated grammar. Let us trace what happens when the two descriptions of Figure 7 are given as inputs to the function UNIFY-DESC. Figure 8 shows the feature-description which results after step 1 of the algorithm. The definite components of the two descriptions have been unified, and their indefinite compo- nents have been conjoined together. In step 2 of the algorithm each of the disjuncts of DESC.INDEFINITE is checked for compatibility with DESC.DEFINITE, using the function CHECK-IN'DEF. In this case, all disjuncts are compatible with the definite infor- mation, except for one; the disjunct of the third disjunction which contains the feature Number : Sing. This disjunct is eliminated, and the only remaining disjunct in the disjunc- tion (i.e., the disjunct containing Number : PI) is unified with DESC.DEFINITE. The result after this step is shown in Figure 9. The four disjuncts that remain are numbered for convenience. In step 3, NWISE-CONSISTENCY is used with 1 as the value of the parameter n. A new description is hypothesized by unifying disjunct (1) with the definite component of the description (i.e., NEW-DESC.DEFINITE). Then disjuncts (3) and (4) are checked for compatibility with this hypothe- sized structure: (3) is not compatible, because the values of the Transitivity features do not unify. Disjunct (4) is also incompatible, because it has Goal : Person : 3, and the hy- 239 GRAMMAR: DEFINITE = [ Rank : Clause ] Sub] : Caes : Nora INDEFINITE = ( [ Yo4ca : Paa~dus Transitivity : Trana ~< Sub] >, < Goal >] Traneitlvity : Intran$ Actor : Person : 3 Number : Sing Sub] : Number : Sing V % V 'I Vo~cs : Actiu,, | ~< Sub] >, < Actor >! J Goal : Pereon : 3 Number : Pl ] S~] : Number : Pl SUBJECT CONSTITUENT: Lez : y'all ] DEFINITE = Sub] : Person : 2 Number : Pl INDEFINITE = NIL Figure 7: Two descriptions to be unified. pothesized description has ~< Sub] >, < Goal >l, along with Sub] : Person : 2. 
Therefore, since there is no compatible dlsjunct among (3) and (4), the hypothesis that (1) is com- patible with the rest of the description has been shown to be invalid, and (1) can be eliminated. It follows that disjunct (2) should be unified with the definite part of the descrip- tion. Now disjuncts (3) and (4) are checked for compatibility with the definite component of the new description: (3) is no longer compatible, but (4) is compatible. Therefore, (3) ls eliminated, and (4) is unified with the definite information. No disjunctions remain in the result, as shown in Figure 10. 5 Complexity of the Algorithm Referring to Figure 3, note that the function LrNIF¥-DESC may terminate after any of the three major steps. After each step it may detect inconsistency between the two descriptions and terminate, returning failure, or it may terminate because no disjunctions remain in the descrlption. Therefore, it is useful to examine the complexity of each of the three steps independently. Let n represent the total number of symbols in the com- bined description f ^ g, and d represent the total number of disjuncts (in both top-level and embedded disjunctions) contained in f A g. Step I. This step performs the unification of two DG struc- tures. Ait-Kaci [11 has shown how this operation can be per- formed in almost linear time by the UNION/FIND algorithm. Its time complexity has an upper bound of O(n log n). Since an unknown amount of a description may be contained in the definite component, this step of the algorithm also requires O(n log n) time. Slop ~. For this step we examine the complexity of the function CHECK-INDEF. There are two nested loops in CHECK-INDEF, each of which may be executed at most once for each disjunct in the description. The inner loop checks the compatibility of two DG structures, which requires no more time than unification. Thus, in the worst case, CHECK- INDEF requires O(d2n log n) time. Step 8. NWISE-CONSISTENCY requires at most 0(2 ~/~) time. In this step, NWISE-CONSISTENCY is called at most (d/2) - 1 times. Therefore, the overall complexity of step 3 0(2"/2). Discussion. While the worst case complexity of the entire algorithm i, 0(2~), an exponential, it is significant that it often terminates before step 3, even when a large number of dlsjunctlons are present in one of the descriptions. Thus, in many practical cases the actual cost of the algorithm is bounded by a polynomial that is at most d2n log n. Since must be less than n, this complexity function is almost cubic. Even when step 3 must be used, the number of remaining disjunctions is often much fewer than d/2, so the exponent i, usually a small number. The algorithm performs well in most cases, because the three steps are ordered in increasing complexity, and the number of disjunctions can only decrease during unification. 6 Implementation The algorithm presented in the previous sections has been im- plemented and tested as part of a general parsing method for Systemic Functional Grammar, which is described in 13]. The algorithm was integrated with the structure building module of the PATR-II system [10], written in the Zetalisp program- ming language. While the feature-description corresponding to a grammar may have hundreds of disjunctions, the descriptions that re- sult from parsing a sentence usually have only a small number of disjunctions, if any at all. 
Most disjunctions in a systemic grammar represent possible alternative values that some par- ticular feature may have (along with the grammatical conse- quences entailed by choosing particular values for the fea- ture). In the analysis of a particular sentence most features have a unique value, and some features are not present at all. When disjunction remains in the description of a sentence after parsing, it usually represents ambiguity or an under- specified part of the grammar. With this implementation of the algorithm, sentences of up to I0 words have been parsed correctly, using a grammar which contains over 300 disjunctions. The time required for most sentences is in the range of 10 to 300 seconds, running on lisp machine hardware. The fact that sentences can be parsed at all with a gram- mar containing this many disjunctions indicates that the al- gorithm is performing much better than its theoretical worst case time of O(2d). 2 The timings, shown in Table 1, obtained from the experimental parser for systemic grammar also in- dicate that a dramatic increase in the number of disjunctions in the grammar does not result in an exponential increase in parse time. Gos is a grammar containing 98 disjunctions, 2Consider, 2300 ~ 2 s°, and 2 s° is taken to be a rough estimate of the number of particles in the universe. 240 DESC.DEFINITE = Rank : Clause Case : Nora Lee : y'all Sub] : Person : 2 Number : Pl DESC.INDEFINI.TE = Voice : Passive Transitivity : Trans [< Sub] >, < Goal >] Transitivity : Intrans Actor : Person : 3 Number : Sing Sub] : Number : Sing vE 1) [< Sub] >, < Actor >] [Transitivity:Trans t) V Goal : Person : 3 [ Number:Pl ] ¢ Sub] : Number : Pl Figure 8: UNIFY-DESC: After step 1 (UNIFY-DGS). NEW-DESC.DEFINITE = Rank : Clause Case : Nora Lee : y' all Sub]: Person:2 Number : PI Number : PI NEW-DESC.INDEFINITE = Voice : Passive (1) Transitivity : Trans [< Sub] >, < Goal >] Transitivity : Intrans (3) Actor : Person : 3 Voice : Active ] ) v(2) ~<Suby>,<Actor>] V (4) Goal : Person : 3 Figure 9: UNIFY-DESC: After step 2 (CHECK-INDEF). Rank : Clause Case : Nora Lee : y' all Sub] : Person : 2 NEW-DESC.DEFINITE = Number : Pl Number : Pl Voice : Active [< Subj >, < Actor >] Transitivity : Trans Goal : Person : 3 NEW-DESC.INDEFINITE = NIL Figure 10: UNIFY-DESC: After step 3 (NWISE-CONSISTENCY). 241 Sentence Gos G44o Nigel has been speaking English. 22.9 144".3 Nigel has been speaking English to me. 28.6 203.5 Table i: Timings obtained from a systemic parser. and G,,o is a grammar containing 440 disjunctions. The total time used to parse each sentence is given in seconds. 7 Conclusions The unification method presented here represents a general solution to a seemingly intractable problem. This method has been used successfully in an experimental parser for a gram- mar containing several hundred disjunctions in its descrip- tion. Therefore, we expect that it can be used as the basis for language processing systems requiring large grammatical descriptions that contain disjunctive information, and refined as necessary and appropriate for specific applications. While the range of speed achieved by a straightfQrward implementation of this algorithm is acceptable for grammar testing, even greater efficiency would be desirable (and neces- sary for applications demanding fast real-time performance). 
Therefore, we suggest two types of refinement to this algorithm as topics for future research: using heuristics to determine an opportune ordering of the disjuncts within a description, and using parallel hardware to implement the compatibility tests for different disjunctions.

Acknowledgements

I would like to thank Bill Rounds, my advisor during graduate studies at the University of Michigan, for his helpful criticism of earlier versions of the algorithm which is presented here. I would also like to thank Bill Mann for suggestions during its implementation at USC/ISI, and Stuart Shieber for providing help in the use of the PATR-II system.

This research was sponsored in part by the United States Air Force Office of Scientific Research contracts FQ8671-84-01007 and F49620-87-C-0005, and in part by the United States Defense Advanced Research Projects Agency under contract MDA903-81-C-0335; the opinions expressed here are solely those of the author.

References

[1] Ait-Kaci, H. A New Model of Computation Based on a Calculus of Type Subsumption. PhD thesis, University of Pennsylvania, 1984.

[2] Karttunen, L. Features and Values. In Proceedings of the Tenth International Conference on Computational Linguistics: COLING 84, Stanford University, Stanford, California, July 2-7, 1984.

[3] Kasper, R. Systemic Grammar and Functional Unification Grammar. In J. Benson and W. Greaves, editors, Systemic Functional Perspectives on Discourse: Selected Papers from the 12th International Systemics Workshop, Norwood, New Jersey: Ablex (forthcoming).

[4] Kasper, R. and W. Rounds. A Logical Semantics for Feature Structures. In Proceedings of the 24th Annual Meeting of the Association for Computational Linguistics, Columbia University, New York, NY, June 10-13, 1986.

[5] Kasper, R. Feature Structures: A Logical Theory with Application to Language Analysis. PhD dissertation, University of Michigan, 1987.

[6] Kay, M. Functional Grammar. In Proceedings of the Fifth Annual Meeting of the Berkeley Linguistics Society, Berkeley Linguistics Society, Berkeley, California, February 17-19, 1979.

[7] Kay, M. Parsing in Functional Unification Grammar. In D. Dowty, L. Karttunen, and A. Zwicky, editors, Natural Language Parsing. Cambridge University Press, Cambridge, England, 1985.

[8] Pereira, F. C. N. and D. H. D. Warren. Definite clause grammars for language analysis - a survey of the formalism and a comparison with augmented transition networks. Artificial Intelligence, 13:231-278, 1980.

[9] Rounds, W. C. and R. Kasper. A Complete Logical Calculus for Record Structures Representing Linguistic Information. Symposium on Logic in Computer Science. IEEE Computer Society, June 16-18, 1986.

[10] Shieber, S. M. The design of a computer language for linguistic information. In Proceedings of the Tenth International Conference on Computational Linguistics: COLING 84, Stanford University, Stanford, California, July 2-7, 1984.

[11] Shieber, S. M. An Introduction to Unification-based Approaches to Grammar. Chicago: University of Chicago Press, CSLI Lecture Notes Series, 1986.
REVISED GENERALIZED PHRASE STRUCTURE GRAMMAR

Eric Sven Ristad(1)
M.I.T. Artificial Intelligence Lab, 545 Technology Square, 805, Cambridge, MA 02139
Thinking Machines Corporation, 245 First Street, Cambridge, MA 02142

ABSTRACT

In this paper, I revise generalized phrase structure grammar (GPSG) linguistic theory so that it is more tractable and linguistically constrained. Revised GPSG is also easier to understand, use, and implement. I provide an account of topicalization, explicative pronouns, and parasitic gaps in the revised system and conclude with suggestions for efficient parser design.

1 Introduction and Motivation

A linguistic theory specifies a computational process that assigns structural descriptions to utterances. This process requires certain computational resources, such as time or space. In a descriptively adequate linguistic theory, the computational resources available to the theory match those used by the ideal speaker-hearer. The goal of this paper is to revise generalized phrase structure grammar (GPSG) so that its computational power corresponds to the ability of the speaker-hearer.

The bulk of this paper is devoted to identifying what computational resources are used by GPSG theory, and deciding whether they are linguistically necessary. GPSG contains five formal devices, each of which provides the theory with the resources to model some linguistic phenomenon or ability. I identify those aspects of each device that cause intractability and then restrict the computational power of each device to more closely match the (inherent) complexity of the phenomenon or ability it models. The remainder of the paper presents the new formal system and exercises it in the domain of topicalization, explicative pronouns, and parasitic gaps. I conclude with suggestions for efficient parser design and future research.

In my opinion, the primary value of this work lies in the result (revised GPSG, or RGPSG) as well as in the methodology of using complexity analysis to improve linguistic theories. The methodology explicates how a tool of modern computer science can help us understand and improve theories of linguistic competence. More than that, complexity analysis forms the foundation of informed parser design. I feel RGPSG is of value both to linguists and computational linguists because it is more tractable and easier to understand, use, and implement. It can be efficiently implemented and appears to have better empirical coverage than its GPSG ancestor.

(1) The author is supported by a graduate fellowship from the IBM Corporation. This research was supported in part by Thinking Machines Corporation and by NSF Grant DCR-85552543, under a Presidential Young Investigator Award to Professor Robert C. Berwick. I wish to thank Ed Barton for stylistic improvements and helpful discussion; Robert Berwick for support, criticism, and suggesting I pursue this research; and Geoff Pullum for his patient help with GPSG theory.

2 Eliminating Intractability in GPSG

Ristad (1986a) examines the computational complexity of two components of the GPSG formal system (metarules and the feature system) and shows how each of these systems can lead to computational intractability. Ristad also proves that the universal recognition problem for GPSGs is EXP-POLY hard, and intractable.(2) In other words, the fastest recognition algorithm for GPSGs can take more than exponential time. These results may appear surprising, given GPSG's weak context-free generative power.
They also raise some important computational and linguistic questions: why GPSG-Recognition is so difficult, what aspects of the GPSG formalisms cause intractability, and whether they are linguistically necessary. I begin with an outline of the GPSG formal system, as presented in Gazdar, Klein, Pullum, and Sag (1985), GKPS hereafter. Subsequently, I identify and remove the excess computational power provided by each formal device.

(2) The universal recognition problem most accurately reflects the difficulty of processing a grammatical formalism because it incorporates the grammar in the problem statement, as explained in Barton, Berwick, and Ristad (1987).

2.1 Overview of GPSG Formalisms

From the perspective of classic formal language theory, a GPSG may be thought of as a grammar for generating a context-free grammar. The generation process begins with immediate dominance (ID) rules, which are context-free productions with unordered right-hand sides. An important feature of ID rules is that nonterminals in the rules are not atomic symbols (for example, NP). Rather, GPSG nonterminals are sets of [feature, feature-value] pairs. For example, [N +] is a [feature, feature-value] pair, and the set {[N +], [V -], [BAR 2]} is the GPSG representation of a noun phrase. Next, metarules apply to the ID rules, resulting in an enlarged set of ID rules. Metarules have fixed input and output patterns containing a distinguished multiset variable W in addition to constants. If an ID rule matches the input pattern under some specialization of the variable W, then the metarule generates an ID rule corresponding to the metarule's output pattern under the same specialization of W. For example, the passive metarule

VP -> W, NP
  ==>                                                (1)
VP[PAS] -> W, (PP[by])

says that "for every ID rule in the grammar which permits a VP to dominate an NP and some other material, there is also a rule in the grammar which permits the passive category VP[PAS] to dominate just the other material from the original rule, together (optionally) with a PP[by]" (GKPS:59). In Ristad (1986a), the finite closure problem is used to determine the cost of metarule application.

Principles of universal feature instantiation (UFI) apply to the resulting enlarged set of ID rules, defining a set of phrase structure trees of depth one (local trees). One principle of UFI is the head feature convention, which ensures that phrases are projected from lexical heads. Informally, the head feature convention is GPSG's X-bar theory. Ristad (1986a) uses the category membership problem to determine, in part, the cost of mapping ID rules to local trees. Finally, linear precedence statements are applied to the instantiated local trees. LP statements order the unordered daughters in the instantiated local trees. The ultimate result, therefore, is a set of ordered local trees, and these are equivalent to the context-free productions in a context-free grammar. The resulting context-free grammar derives the language of the GPSG.

The process of assigning structural descriptions to utterances consists of two steps in GPSG: the projection of ID rules to local trees and the derivation of utterances from nonterminals, using the local trees. Accordingly, formal devices may supply resources to either process.
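Since metarule application is central to what follows, a small sketch may help fix ideas. The encoding below is hypothetical (categories as Python dicts of feature specifications, metarule patterns as fixed daughter lists); it shows one way the multiset variable W can be bound when the passive metarule (1) is applied to the transitive VP rule.

```python
def matches(cat, pattern):
    """A category matches a pattern if it bears every specification
    the pattern mentions."""
    return all(cat.get(f) == v for f, v in pattern.items())

def apply_metarule(rule, in_mother, in_daughters, out_mother, out_daughters):
    """Apply a metarule with input pattern `in_mother -> in_daughters, W`:
    bind W to the daughters left over after matching, then build the
    output rule `out_mother -> W, out_daughters`."""
    mother, daughters = rule
    if not matches(mother, in_mother):
        return None
    rest = list(daughters)
    for pat in in_daughters:            # remove one matching daughter per
        for i, d in enumerate(rest):    # constant in the input pattern
            if matches(d, pat):
                del rest[i]
                break
        else:
            return None                 # a required daughter is missing
    return ({**mother, **out_mother}, rest + list(out_daughters))

VP = {'N': '-', 'V': '+', 'BAR': 2}
NP = {'N': '+', 'V': '-', 'BAR': 2}
vp_rule = (VP, [{'SUBCAT': 2}, NP])     # VP -> H[2], NP

# Passive metarule (1): VP -> W, NP  ==>  VP[PAS] -> W, (PP[by]);
# the optional PP[by] is omitted here for brevity.
passive = apply_metarule(vp_rule, VP, [NP], {'PAS': '+'}, [])
# passive == ({'N': '-', 'V': '+', 'BAR': 2, 'PAS': '+'}, [{'SUBCAT': 2}])
```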
2.2 Theory of Syntactic Features

In current GPSG theory, syntactic categories (nonterminals) encode linguistic relations as feature-value pairs. If a relation is true of two categories in a phrase structure tree, then the relation will be encoded in every category on the unique path between the two categories. The primary computational resource provided by the theory of syntactic features is polynomial space, primarily due to the large number of possible syntactic categories arising from finite feature closure. Ristad (1986a) observes that finite feature closure admits a surprisingly large number of possible categories, growing explosively with the number a of atomic-valued features and the number b of category-valued features. In fact, there are more than 10^775 categories in the GKPS system.

Fortunately, the full power of embedded categories does not appear to be linguistically necessary because no category-valued feature need ever contain another.(3) In GPSG, there are three category-valued features: SLASH, which marks the path between a gap and its filler with the category of the filler; AGR, which marks the path between an argument and the functor that syntactically agrees with it (between the subject and matrix verb, for example); and WH, which marks the path between a wh-word and the minimal clause that contains it with the morphological type of the wh-word. AGR will never contain SLASH because a functor (verb or predicate) will never select a gap or a constituent containing a gap as its argument. Conversely, SLASH will never be required to contain AGR because such a category corresponds to "the following imaginary (and rather weird) case: Suppose we found a language in which finite verb phrases could be fronted over an unbounded domain provided that they were in the agreement form associated with third-person-singular NP controllers" (Pullum, personal communication). Similarly, because the value of WH is the category of a wh-noun phrase, and because wh-nominals never contain gaps, WH can never contain SLASH or AGR. In point of fact, no category embeddings appear in the GKPS grammar for English, and it is difficult to see how they would appear in a GPSG for any other natural language.

The obvious revision, then, is unit feature closure: to limit category-valued features to containing only 0-level categories. (0-level categories do not contain any category-valued features.) I adopt this strongly falsifiable constraint in RGPSG. The depth of category embedding is purely an empirical issue, and hence unit closure is not ad hoc. The other revision is primarily notational: any RGPSG feature f may assume the distinguished values noBind or unbound in addition to those values determined by rho(f). A noBind value indicates that the feature may not receive a value in an extension of the given category, while unbound indicates that the feature does not currently have a value, and may receive one in extension.

(3) Let f and g be any distinct category-valued features. I am arguing that although f may appear inside g in some language, f will never be required to appear inside g.
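The unit feature closure constraint is easy to state operationally. Here is a minimal sketch, assuming a dictionary encoding of categories; the atomic feature inventory shown is illustrative, not the full GKPS one.

```python
CATEGORY_VALUED = {'SLASH', 'AGR', 'WH'}
NOBIND, UNBOUND = 'noBind', 'unbound'

def is_zero_level(cat):
    """A 0-level category bears no category-valued feature with a
    category value (noBind/unbound are still permitted)."""
    return all(cat.get(f) in (None, NOBIND, UNBOUND)
               for f in CATEGORY_VALUED)

def satisfies_unit_closure(cat):
    """Unit feature closure: every category-valued feature, if it has
    a value at all, holds a 0-level category."""
    for f in CATEGORY_VALUED:
        v = cat.get(f)
        if v in (None, NOBIND, UNBOUND):
            continue
        if not (isinstance(v, dict) and is_zero_level(v)):
            return False
    return True

np = {'N': '+', 'V': '-', 'BAR': 2}
s_slash_np = {'N': '-', 'V': '+', 'BAR': 2, 'SLASH': np}
assert satisfies_unit_closure(s_slash_np)           # NP inside SLASH: fine
assert not satisfies_unit_closure(
    {'BAR': 2, 'SLASH': {'BAR': 2, 'AGR': np}})     # embedding: ruled out
```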
2.3 Immediate Dominance/Linear Precedence

GPSG's ID/LP format models certain word order phenomena, such as the head parameter and some free word order facts. An ID rule is a context-free production C0 -> C1, C2, ..., Cn whose left-hand side (LHS) is the mother category and whose right-hand side (RHS) is an unordered multiset of daughter categories, some of which may be designated as head daughters. The LHS immediately dominates the unordered RHS in a tree of depth one (a local tree).

2.3.1 Complexity in ID/LP

ID rules significantly increase the time resources available to the GPSG derivation process in four related ways. First, a derivation step is nondeterministic because a category may immediately dominate more than one RHS. Second, the derivation process may alternate between a derivation step involving the ID rules C -> C1 | ... | Ck that corresponds to an OR-transition (only one of k possible successors must yield a terminal string) and a derivation step involving an ID rule C -> C1, C2, ..., Ck that corresponds to an AND-transition (all k successors must yield terminal strings). These two devices introduce lexical and structural ambiguity. As is well known, ambiguity is a central property of natural languages. Therefore, I consider this aspect of ID rules linguistically essential, and it will be retained in RGPSG.

Third, unrestricted null transitions in ID rules are a source of intractability because they allow GPSGs to generate enormous phrase structure trees whose yield is the empty string (see Ristad, 1986a). Thus, a parser that used such a grammar must nondeterministically postulate elaborate phrase structure in between its input tokens. The indisputable unnaturalness of this ability motivates me to greatly restrict null transitions in RGPSG.

Fourth, the multiset RHS of an ID rule contributes to a large space of local phrase structure trees: an ID rule with a RHS of cardinality b can, if unconstrained by LP statements, correspond to b! ordered productions. In parsing practice, this can cause a combinatorial explosion in a context-free parser's state space (see Barton, 1985). In addition to causing nondeterminism in any GPSG-based parser, the multiset RHS confers on GPSG the ability to count nonterminals. The apparent artificiality of this device, as discussed in Barton, Berwick, and Ristad (1987:260-261), will motivate me to adopt a substantive constraint of short ID rules in RGPSG (binary branching, for example).(4)

(4) The binary branching constraint is independently motivated by the linguistic arguments of Kayne (1981) and others. In that work, Kayne argues that the path from a governed category to its governor (for example, from an anaphor to its antecedent) must be unambiguous; informally put, "an unambiguous path is a path such that, in tracing it out, one is never forced to make a choice between two (or more) unused branches, both pointing in the same direction" (Kayne 1981:146). The unambiguous path requirement sharply constrains fan-out in phrase structure trees because n-ary branching, for n > 2, is only possible when none of the n sister nodes must govern any other nodes in the phrase structure tree.

2.3.2 Revised ID/LP

RGPSG ID rules have exactly one mother and at least one head daughter. The heads are separated notationally from the nonheads by a colon, and appear to the left of the colon. The mother and all head daughters are implicitly specified for [NULL -]. For example, the RGPSG headed ID rule 2 corresponds to the GPSG ID rule 3.

VP -> [SUBCAT 2] : NP                                (2)
VP[NULL -] -> H[SUBCAT 2, NULL -], NP                (3)

There is only one lexical element for the null string, and it is universal across all grammars:

X2_i [SLASH X2_i, NULL +] -> e

Co-subscripting indicates that the two X2 categories must be identical in any legal projection of the rule, with the exception of the [NULL +] and SLASH specifications. This restricted ID rule format, when coupled with a restriction on metarules that prevents them from affecting head daughters, prevents head daughters from ever being erased in an RGPSG derivation. Thus, null transitions are effectively eliminated from RGPSG.

An ordered production is an ID rule whose daughters are completely linearly ordered, that is, a string of daughter categories rather than multisets of head and nonhead daughters. An ordered production is LP-acceptable if all LP statements in the RGPSG are true of it.

The RGPSG ID/LP formalism does not contain formal constraints sufficient to guarantee polynomial-time recognition, although the linguistically justified use of short ID rules can render ID rules tractable, because ID/LP grammars with bounded rules can be parsed in time polynomial in the grammar size.(5)

(5) If the length bound for natural language grammars is the constant b, then any ID/LP grammar G can be converted into a strongly equivalent CFG G', of size O(|G| * b!) = O(|G|), by simply expanding out the constant number of linear precedence possibilities. In the GKPS and RGPSG grammars for English, b = 3 because double object constructions ([give NP NP], for example) are assigned a flat, ternary branching structure. (I ignore the iterating coordination schema, which licenses rules with unbounded right-hand sides.) It is important, however, that the short rules reflect a genuine constraint and that the grammar does not use some other mechanism to get the effect of longer rules (feature instantiation, for example).
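Before moving on to metarules, note that the LP-acceptability definition above translates directly into code. The sketch below uses a hypothetical encoding (categories as plain labels, assumed distinct within a rule) and enumerates the LP-acceptable ordered productions of one ID rule; with rules bounded at b daughters, the b! candidate orderings stay constant-bounded, which is what the polynomial-time observation rests on.

```python
from itertools import permutations

def lp_acceptable(order, lp_statements):
    """An ordered production satisfies an LP statement (a, b) iff no
    occurrence of b precedes an occurrence of a."""
    pos = {c: i for i, c in enumerate(order)}
    return all(pos[a] < pos[b] for a, b in lp_statements
               if a in pos and b in pos)

def ordered_productions(heads, nonheads, lp_statements):
    """Expand one ID rule's unordered RHS into every LP-acceptable
    ordered production (at most b! of them for b daughters)."""
    return [order for order in permutations(heads + nonheads)
            if lp_acceptable(order, lp_statements)]

# e.g. rule (2) above under a head-first LP regime:
print(ordered_productions(['V0[SUBCAT 2]'], ['NP'],
                          [('V0[SUBCAT 2]', 'NP')]))
# -> [('V0[SUBCAT 2]', 'NP')]
```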
2.4 Metarules

Metarules are lexical redundancy rules. Formally, they are functions that take lexical ID rules (ID rules with a lexical head) to sets of lexical ID rules. See the GKPS passive metarule above. The GKPS grammar for English also includes metarules for subject-aux inversion, extraposition, and transitivity alternations. The complete set of ID rules in a GPSG is the maximal set that can be arrived at by taking each metarule and applying it to the set of rules that did not themselves arise from the application of that metarule. This maximal set is called the finite closure FC(M, R) of a set R of lexical ID rules under a set M of metarules.

2.4.1 Complexity of Metarules

Metarules can increase the time and space resources available to the derivation process by introducing null transitions and ambiguity in ID rules and by increasing the space of ID rules more than exponentially. They can also increase the cost of the projection process itself: finite closure is nondeterministic (NP-hard, in fact) because metarules are applied to ID rules nondeterministically.

2.4.2 Revised Metarules

Unrestricted null transitions are both linguistically and computationally undesirable. Moreover, the ability of metarules to affect lexical head daughters is in direct conflict with their linguistic purpose: "to express generalizations about the subcategorization possibilities of lexical heads" (GKPS:59). Unrestricted metarules can destroy the relation between a phrase and its lexical head, and thereby violate X-bar theory. The first step in revising metarules is to restrict them to only affect nonhead daughters in lexical ID rules. Because of this change, metarules cannot alter the implicit [NULL -] specification on the head daughters. Therefore, once a category is expanded in a derivation, it must be lexically realized in the derived string. This formal constraint ensures that the empty string does not have elaborate phrase structure in RGPSG.

Metarule finite closure generates many linguistically incorrect ID rules that must be excluded by other GPSG devices (FCRs, for example). The GKPS grammar for English contains six metarules; out of approximately 1944 possible metarule interactions in principle, only two such interactions appear to be productive (passive followed by subject-aux inversion or slash termination metarule 1).(6) Therefore, the second metarule restriction adopted by RGPSG is biclosure, instead of finite closure.(7)

(6) Given a set of n metarules, the number of possible metarule interactions is the number of ways to pick n or fewer metarules from the set, where order matters and repetitions are not allowed. That number is given by the total number of possible k-selections from the n metarules, where k varies from 0 (no metarules apply) to n (any combination of all metarules apply). Thus, the number of possible interactions is f(n) = sum over k from 0 to n of n!/(n-k)!. This is not the size of metarule finite closure, because it does not consider the possibility of a metarule matching an ID rule in more than one way.

(7) Metarule biclosure does not overgenerate as badly as finite closure, and thereby promotes descriptive adequacy at the expense of some explanatory power. Biclosure has an edge in descriptive economy (explanatory power) over unit closure because simpler (and fewer) metarules are needed with biclosure. Thus, the length of metarule derivations is not totally ad hoc because it is subject to scientific criterion.
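To make the contrast concrete, here is one possible reading of metarule closure, sketched in Python with metarules as functions from a rule to a rule (a simplification: real metarules can match a rule in several ways and so return rule sets). Setting depth = 2 corresponds to the biclosure adopted here, while letting depth equal the number of metarules approximates GKPS-style finite closure.

```python
def metarule_closure(base_rules, metarules, depth):
    """Close a rule set under metarule derivations of length <= depth,
    never applying the same metarule twice in one derivation.
    `metarules` maps a metarule name to a function returning a derived
    rule or None when the input pattern does not match."""
    rules = [(r, frozenset()) for r in base_rules]
    frontier = list(rules)
    for _ in range(depth):
        new = []
        for rule, used in frontier:
            for name, metarule in metarules.items():
                if name in used:
                    continue
                out = metarule(rule)
                if out is not None and \
                   all(out != r for r, _ in rules + new):
                    new.append((out, used | {name}))
        rules.extend(new)
        frontier = new          # next round extends only fresh output
    return [r for r, _ in rules]
```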
2.5 Principles of Universal Feature Instantiation

The ID rules obtained by taking the finite closure of the metarules on the ID rules are projected to local phrase structure trees. Abstractly, this process establishes the connection between those relations encoded in ID rules (for example, domination, subcategorization, case, modification, and predication) and the nonlocal linguistic relations. Local trees are projected from ID rules by mapping the categories in a rule into legal extensions of those categories in the projected local tree.

Principles of universal feature instantiation (UFI) constrain this projection by requiring categories in a local tree to agree in certain feature specifications when it is possible for them to do so. For example, the head feature convention (HFC) requires the mother to agree with all head features that the head daughters agree on, if agreement is possible. The HFC expresses X-bar theory in part, requiring a phrase to be the projection of its head. It also plays a central role in the GPSG account of coordination phenomena, requiring the conjuncts in a coordinate structure to all participate in the same linguistic relations with the rest of the sentence. The two other principles of UFI are the control agreement principle and the foot feature principle. The control agreement principle represents the GPSG theory of predicate-argument relations; informally, it requires predicates to agree with their arguments (for example, verb phrases must agree with their subject NPs in English). The foot feature principle provides a partial account of gap-filler relations in the GPSG system, including parasitic gaps and the binding facts of reflexive and reciprocal pronouns; it plays a role strikingly similar to that of Pesetsky's (1982) path theory and Chomsky's (1986) binding and chain theories.(8) Informally, the foot feature principle ensures that certain syntactic information is not lost. "Exceptional" feature specifications are those feature specifications in an ID rule that should agree by virtue of a principle of UFI, but are unable to without changing a feature specification inherited from the ID rule.

(8) The possibility of expressing the control agreement and foot feature principles as local constraints on nonlocal relations falls out from the central role of c-command, or equivalently unambiguous paths, in binding theory. C-command is a local relation, in fact the primary source of locality in phrase structure (see Berwick and Wexler 1982). Similarly, the possibility of encoding multiple gap-filler relations in one feature specification of one category corresponds to the "no crossing" constraint of path theory. Pesetsky (1982:556) compares the predictions of path theory and principles of UFI when the two diverge in cases of double extraction (for example, a problem that_i I know who_j to talk to e_j about e_i) from coordinate structures. He concludes that "the apparent simplicity of the slash category solution fades when more complex cases are considered."

2.5.1 Complexity of UFI

The three principles of UFI all cause intractability because they provide the derivation process with reusable space resources. First, each principle of UFI can enforce nonlocal feature agreement in phrase structure. Ristad (1986b) shows how this causes NP-hardness, when coupled with lexical ambiguity or null transitions. A related source of intractability is that the projection of ID rules to local trees can create an astronomical space of local trees, which in turn increases parser search space. These two sources of intractability cannot be eliminated because they are essential to GPSG's account of linguistic agreement among conjuncts and between predicates and their arguments, gaps and their fillers, and phrases and their lexical heads.

The use of exceptional feature specifications in these principles allows a derivation to reuse the space resources provided by the ID rules and theory of syntactic features. In the reduction of Ristad (1986a), head features encode an alternating Turing machine tape. The HFC is used to transfer the tape contents for an ATM configuration C0 (represented by the mother) to its immediate successors C1, C2, ..., Ck (the head daughters). The configurations C0, C1, ..., Ck have identical tapes, with the critical exception of one tape square. If the HFC enforced absolute agreement between the head features of the mother and head daughters, the polynomial space ATM computation could not be simulated in this manner.

2.5.2 Universal Feature Instantiation in RGPSG

Principles of universal feature instantiation in RGPSG all preserve a simple invariant across all ID rules. They are monotonic; that is, they never delete or alter existing feature specifications. The head feature convention, for example, ensures that the mother agrees exactly with all head feature specifications that the head daughters agree on, regardless of where the specifications come from. Principles of UFI are first applied to the ID rule output of metarule unit closure. After this initial application, each principle always applies, governing the well-formedness of the ID rule extension relation. The resulting ID rules derive utterances in the language generated by the RGPSG.

Head feature convention. The head feature convention enforces the invariant that the mother is in absolute agreement with all head features on which the head daughters agree. It also requires the BAR value on a head daughter to be less than or equal to the BAR value on the mother. HEAD contains exactly those features that must be equivalent on the mother and head daughters of every ID rule:(9)

HEAD = {AGR, ADV, AUX, INV, LOC, N, NFORM, PAS, PAST, PER, PFORM, PLU, PRD, V, VFORM}

Control agreement principle. The control agreement principle (CAP) differs from the HFC in that it establishes equivalences (links) between the categories in an ID rule: when two categories are linked in an ID rule, the two categories must be identical in any legal extension of that rule. Links are calculated immediately after the HFC has applied to the ID rules for the first time; once a link is established in an ID rule, it cannot be changed or undone.(10) The first part of the CAP calculates control relations between categories, while the second part of the CAP establishes links using the control relations. In all cases, linking is indicated by co-subscripting.

(9) In order to properly account for feature instantiation in the binary and iterating coordination schemata, the binary head (BHEAD) features BAR, SUBJ, SUBCAT, and SLASH are considered to be head features for the purposes of the HFC in all nonlexical, multiply-headed ID rules.

(10) In GKPS, only head feature specifications and inherited foot feature specifications determine the semantic types relevant to the definition of control. RGPSG simplifies this by considering inherited feature specifications and only some head feature specifications. Alternatively, control relations could be calculated every time the HFC instantiates a feature specification.

RGPSG control relations are calculated as follows. A predicate is a VP or an instantiation of XP[+PRD] such as a predicate nominal or adjective phrase. The control feature of a category C, where C(BAR) is not 0, is SLASH if C is specified for SLASH; otherwise, it is AGR. Control is calculated once and for all immediately after the HFC has applied to the ID rules resulting from metarule unit closure. Let f1 be the control feature of a category C1. Then C1 is controlled by C2 in a rule if and only if C1(f1) = C2, C2 extends X2, and either the rule is C0 -> C1 : C2 (recall that C1 is the head daughter), or the rule is C0 -> C3 : C1, C2, and C0 and C3 extend VP.

The RGPSG control agreement principle states: In an ID rule r = C0 -> C1, ..., Ci : Ci+1, ..., Cn

- If Cj controls Ck and fk is the control feature of Ck, then Ck(fk) and Cj are linked.

- If there is a nonhead predicate Cj with no controller, then link Cj(fj) and C0(f0), where fj and f0 are the control features of Cj and C0, respectively.

In the theory of GKPS, the control agreement principle performs subject-verb agreement by enforcing a control relation between the two daughters of the rule

S -> H[-SUBJ], X2

In RGPSG, this rule must be stated as

S -> X2[-SUBJ, AGR X2] : X2

if we wish to enforce the control relation between the two daughters. Because control relations in RGPSG are static (never recalculated), this control relation exists even if X2 is not an NP. Fortunately, no verb will ever be specified for [AGR AP] in the lexicon, and therefore any "questionable" control relations involving an X2 other than NP are ignored at the lexical insertion level.
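A sketch of the RGPSG head feature convention as just stated, again over a hypothetical dictionary encoding of categories (only part of the HEAD inventory is used, and the BAR comparison assumes integer BAR values):

```python
HEAD = {'AGR', 'AUX', 'INV', 'N', 'NFORM', 'PAS', 'PER', 'PLU', 'V', 'VFORM'}

def hfc_instantiate(mother, head_daughters):
    """Monotonic instantiation: extend the mother with each head
    feature value the head daughters agree on and she lacks, never
    deleting or altering an existing specification."""
    out = dict(mother)
    for f in HEAD:
        values = {d.get(f) for d in head_daughters}
        if len(values) == 1 and None not in values and f not in out:
            out[f] = values.pop()
    return out

def hfc_ok(mother, head_daughters):
    """Check the invariant: the mother agrees absolutely with every
    head feature on which all head daughters agree, and each head
    daughter's BAR value is <= the mother's BAR value."""
    for f in HEAD:
        values = {d.get(f) for d in head_daughters}
        if len(values) == 1 and None not in values \
                and mother.get(f) != values.pop():
            return False
    return all(d.get('BAR', 0) <= mother.get('BAR', 0)
               for d in head_daughters)
```

In this reading, hfc_instantiate runs before hfc_ok, so an unspecified mother feature is filled in rather than rejected.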
Foot feature principle. The foot feature principle (FFP) requires any foot feature specification instantiated on a daughter category to also be instantiated on the mother. The specification is identical to any instantiation of the same feature on other daughter categories. The FFP ensures that (1) the existence of inherited foot features on any category of an ID rule blocks instantiation of those foot features on any other component category of the rule, and (2) inherited foot features are equivalent across all component categories of the rule. This second condition may be too strong.

Because the empty string can be dominated only by a category of the form X2[NULL +, SLASH a] in RGPSG, the FFP tries to ensure that every gap will have a unique filler. Unfortunately, it is impossible to truly guarantee recoverability of deletions in RGPSG, because the FFP can only locally constrain the rule-to-tree projection, and not the ID rules themselves. This situation is unavoidable in the GPSG framework, simply because SLASH does not always mark the complete path between a gap and its filler in accepted GPSG analyses. The classic example is the GPSG analysis of subject dependencies, where an S/NP is reanalyzed as a VP, effectively deleting an NP gap in subject position. In GKPS, this operation is performed by slash termination metarule 2 (GKPS:160-2): [SLASH NP] only marks the path from the filler to the mother of the reanalyzed VP. Another example is the GKPS (pp. 150-152) analysis of missing-object constructions such as John is easy to please. In missing-object constructions, [SLASH NP] only marks the path from the NP gap to the VP[INF]/NP dominating to please, failing to continue through the AP easy to please to the filler John. Many sweeping changes would be necessary before the FFP would be able to strictly enforce recoverability of deletions in RGPSG.

2.6 Marking Conventions

Feature co-occurrence restrictions (FCRs) and feature specification defaults (FSDs) are explicit marking conventions used in the GPSG system both to express language-particular facts and to restrict the overgeneration of other formal devices (both metarule and feature closure). FCRs and FSDs are restrictive predicates on categories, constructed by Boolean combination of feature specifications. All legal categories must unconditionally satisfy all FCRs. All categories must also satisfy all FSDs, if it is possible to do so without violating an FCR or a principle of universal feature instantiation. For example,

FCR 1: [INV +] implies ([AUX +] and [VFORM FIN])

requires any category that bears the [INV +] feature specification to also bear the specifications [AUX +] and [VFORM FIN].

2.6.1 Complexity of Marking Conventions

FCRs and FSDs both provide significant resources to the GPSG projection process. First, they allow the projection process to reuse the polynomial space provided by the theory of syntactic features, because they can establish equivalences between the features in a category C and the features in a category contained in C. This ability to apply across embedded categories vastly increases the complexity of the rule-to-tree projection. To see why it is linguistically unnecessary, consider the role of embedded categories.
A category-valued feature f expresses a nonlocal linguistic relation between a category C and the one or more categories that bear the feature specification [f C]. Thus, in the linguistically relevant cases, every embedded category eventually "surfaces" in phrase structure, where the marking conventions are free to apply. The one exception to this argument is FCR 13 in the GKPS grammar for English, which applies "across" an embedded category.

FCR 13: [FIN, AGR NP] implies [AGR NP[NOM]]

In RGPSG, marking conventions may not apply to or across embedded categories. The effect of FCR 13 is achieved in RGPSG by a combination of the simple default SD 2 in section 3.2.2 below and carefully written ID rules.

Second, FCRs and FSDs of the "disjunctive consequence" form [f v] implies [f1 v1] or ... or [fn vn] compute the direct analog of the NP-complete satisfiability problem: when several such FCRs are used together, the GPSG must nondeterministically try all n feature-value combinations.

Third, the process of applying FSDs to local trees is very complex, in part because it is not informationally encapsulated. Rather than simply considering the (existing) feature specifications in each target category separately, FSD application is affected by the other categories in the ID rule, all principles of universal feature instantiation, and even FCRs.

2.6.2 Simple Defaults in RGPSG

There is no reason to believe that marking conventions need be so powerful and unconstrained. The approach RGPSG takes is to virtually eliminate marking conventions. Rather than stating the internal constraints on categories explicitly (and redundantly), as FCRs do, RGPSG eliminates FCRs altogether. Instead, the constraints FCRs express are implicitly stated in the rest of the grammar: in the way ID rules and metarules are written, for example. The sole explicit marking convention in RGPSG is the simple default (SD). Unlike FCRs and FSDs, SDs are constructive, easy to understand and computationally tractable. Each SD is applied (and may be understood) to each category independent of all other categories and RGPSG formal devices, including other SDs. SDs are applied to ID rules immediately after the initial application of principles of UFI.

An SD contains a predicate and a consequent. The consequent is a list of feature specifications. The predicate is a Boolean combination of truth-values and feature specifications such that if a category C bears or extends a given feature specification, that feature specification is true of C, else false. If the predicate is true of a given category C in a rule and the consequent includes only unbound and unlinked features, then the feature specifications listed in the consequent are instantiated on C. Each SD is applied simultaneously to every top-level category in every rule exactly once, in the order specified by the grammar. Consider the following SD:

SD 1: if [SUBCAT] then [BAR 0]

If the target category C in an ID rule is specified for the SUBCAT feature, but unspecified for the BAR feature, then the SD will force the feature specification [BAR 0] on C.
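A sketch of simple default application under the same hypothetical dictionary encoding; the ANY sentinel marks a predicate entry that only requires the feature to be present, as in SD 1.

```python
ANY = object()

def sd_true_of(category, predicate):
    """A conjunctive predicate holds of C when C bears each mentioned
    specification (richer Boolean structure is omitted here)."""
    return all(f in category and (v is ANY or category[f] == v)
               for f, v in predicate)

def apply_sd(category, predicate, consequent, linked=frozenset()):
    """Instantiate the consequent iff the predicate holds and every
    consequent feature is currently unbound and unlinked; each SD
    looks at one top-level category, independently of all other
    categories and devices."""
    if not sd_true_of(category, predicate):
        return category
    if any(f in category or f in linked for f, _ in consequent):
        return category
    return {**category, **dict(consequent)}

# SD 1: if [SUBCAT] then [BAR 0]
print(apply_sd({'V': '+', 'N': '-', 'SUBCAT': 2},
               [('SUBCAT', ANY)], [('BAR', 0)]))
# -> {'V': '+', 'N': '-', 'SUBCAT': 2, 'BAR': 0}
```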
3 The Revised Theory

In this section, I explain how the formal subsystems described above fit together. I begin by formally specifying the class of RGPSGs and the languages they generate. I conclude by translating the GKPS analysis of topicalization, explicative pronouns, and parasitic gaps to the RGPSG formal system. Figure 1 shows the internal organization of RGPSG.

[Figure 1: The internal organization of an RGPSG G with ID rules R, metarules M, and simple defaults S: the ID rules R feed metarule unit closure UC(M, R), whose output passes through SDs and UFI to produce the derived rules. The O-bounds in the diagram show the effect of various formal devices on derived grammar symbol size.]

The set of ID rules R' defined by metarule unit closure, UFI, and SD application generates the language of the RGPSG as follows. If R' contains a rule A -> gamma with an extension A' -> gamma' that satisfies all principles of UFI and is an LP-acceptable ordered production, then for any string of terminals alpha and nonterminals beta, we write alpha A' beta => alpha gamma' beta. This is a derivation step. The language of an RGPSG contains all terminal strings that can be derived, using the ID rules, from any extension of the distinguished start category. Let =>* be the reflexive transitive closure of =>. Then the language L(G) generated by G is

L(G) = { x | x is a terminal string and there exists a category C in K such that C extends Start and C =>* x }

Ristad (1986b) proves that the universal recognition problem for RGPSG is NP-complete, a significant decrease in complexity from the EXP-POLY time hardness of GPSG-Recognition.(11) In fact, of the more than ten sources of intractability lurking in GPSG, only two remain in RGPSG: lexical ambiguity and nonlocal feature agreement. Critically, these two sources of intractability in RGPSG appear to be linguistically essential.

(11) This decrease in complexity is significant from both theoretical and practical perspectives. First, NP-complete problems typically have good average time algorithms, while EXP-POLY problems do not. Next, the fastest recognizer known for GPSGs can require double-exponential time in the worst case, while RGPSG has a simple exponential time recognizer. Finally, NP-complete problems have efficient witnesses, while EXP-POLY hard problems do not. This means that RGPSG parses can always be verified efficiently, while GPSG parses cannot, in general.

3.1 Efficient RGPSG Parsing

Intractability in RGPSG arises from a particularly deadly combination of feature agreement and lexical ambiguity. Underspecification of categories in ID rules and metarules can be costly. This suggests that limiting the number of head features or the scope of their agreement will mitigate the intractability. An efficient recognition algorithm might approximate grammaticality by failing to transfer all head features through coordinate structures (for example, letting them assume default values instead), or by aborting a parse in the face of excessive lexical or structural ambiguity. Efficient parsing techniques based on partial enforcement of UFI are also possible. One such implementation, which propagates feature specifications bottom up using Earley's algorithm, is in progress at Thinking Machines Corporation.
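The derivation relation defined above can also be phrased operationally. In this sketch the extends and acceptable arguments are stand-ins for the real RGPSG extension relation and the combined UFI/LP acceptability test, so it illustrates only the shape of a derivation step.

```python
def derive_step(form, i, rules, extends, acceptable):
    """All single-step rewrites of the category at position i: replace
    it with the ordered RHS of any rule whose LHS it extends, provided
    the chosen ordered extension passes UFI and the LP statements."""
    out = []
    for lhs, rhs in rules:
        if extends(form[i], lhs) and acceptable(form[i], rhs):
            out.append(form[:i] + list(rhs) + form[i + 1:])
    return out

# toy usage: categories are strings, every category extends only itself
print(derive_step(['S'], 0, [('S', ('NP', 'VP'))],
                  extends=lambda c, l: c == l,
                  acceptable=lambda c, rhs: True))
# -> [['NP', 'VP']]
```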
3.2 Linguistic Analysis of English This section reproduces three of the more intricate linguistic anal- yses of GKPS in order to illustrate RGPSG's formalisms. To reproduce their comprehensive analysis of English in toto would be a disservice to that work and is beyond the scope of this paper. Instead, Ristad (1986b) provides an RGPSG roughly equivalent to their GPSG for English; the reader should consult GKPS for the accompanying linguistic exposition. In all cases, co-subscripting indicates linking. 3.2.1 Topicallzation The rule 4a expands clauses and rule 4b introduces unbounded dependency constructions (UDCs) in English. a.S--*XS[sUBJ -.AGR X2] :X~ b. S --. X8 [SUBJ *,SLASH X2] : X~ (4) In both cases the X2 nonhead daughter controls the head daugh- ter, and the control agreement principle links the value of the head daughter's control feature with the 3(2 daughter, creating the ID rules in 5. a. S --* VP[AGR X~x] : X~I b. S [SLASH noBind] .~ S [SLASH X~] :X~ [SLASH noBind]t (s) In the following discussion, [3s] and [3p] abbreviate [PER 3, -PLU] and [PER 3.+PLU], respectively. Note that it is impossible to extract any constituent out of the X~ daughter in 5b because the foot feature principle has forced [SLASH noBind] on the X~ daughter and its mother. This explains the unacceptabihty of 6 in RGPSG, which is permissible in the theory of GKPS. * New York [[ the girl from --] [ we want __ to succeed ]] (s) 3.2.2 Explicative pronouns Now I account for the distribution of the explicative pronouns it and there in infinitival constructions on the basis of postulated ID rules and principles of universal feature instantiation (see GKPS, pp.115-121). The feature specification [AGR NP[NFORM all is abbreviated as +a below, where a is it, there, or NORM. The RGPSG for English includes the ID rules 7, a. S --~ X2 [-SUBJ,AGR X~ : X2 b. VP --, [13] : VP[INF] c. VP -. [1£,] : (PP[to]), VP[INF] (7) d. VP -. [17] : NP, VP[INF] e. VP [AGR 5"] --. [20] : NP the simple defaults 8, a. SD I: if [SUBCAT] then [BAR 0] b. SD 8: ;f [+V,-N,-SUBJ] then [+NORM] (8) the extraposition metarule g, X~ [AGR S] -., W (9) X~[+it;] -. W,S and the lexical entries 10. All other nouns are specified for [NFORM NflRM] by their lexical entries. (it, NP [PRO. -PLU. NFORM it;] ) (there, NP [PRO, NFORM t;here] ) (I0) From the ID rules in 7, RGPSG generates the following ID rules. a. VP [AGRI] --~ VO [13.AGRI] : VP [INF,AGRI] b. VP[AGRI] -~ VO[16,AGRI] : (PP[to]), VP[INF,AGRI] (11) The absence of a controlling category allows the CAP to link the AGR values of the mother and VP[INF] predicate daughter. The HFC then links the AGR values of the mother and lexical head daughter. SD 1 specifies the head daughter for [BAR 0], while SD 2 cannot affect the linked AGR values. VP[AGRI NP[HORM]] --~ V0114.AGR, NP[HORM]]: V~[INF, AGR, NP[NORM]] The CAP and HFC operate identically as in 11, except that the [+NORM] specification is inherited from the ID rule 7b and prop- agated through the rule by the CAP and HFC. VP[AGR~ NP[NORM]] --. V0117,AGR2 NP[HORM]]: NPI, VP[INF, AGRt NP] (12) The NP daughter controls its VP[INF] sister, and the CAP links the AGR value of the VP to its sister NP. SD 2 specifies the mother for [+NORM], and the HFC forces this specification on the head daughter. The rules 13 introduce [+it] and [+there] specifications. Note that 13a is the result of the extraposition metarule on the ID rule 7e. a. VP[+it] -* [20] :NP, S b. VP[+it] -~ [21] :(PP[to]),S[FIN] (13) c. 
VP [AGR NP[*there.PLU ,~] } --* [22] : NP [PLU c~] The rules in 13 may only expand the VP daughters of the ID rules 11 and 12 in a derivation (compare their AGR values). Thus, the grammar claims that explicative pronouns only occur in utterances generated using the rules in 13, in combination with the "extending" rules 11 and 12. This describes the following facts from GKPS, p. 120. I~ {It} *There [continues [ to bother [ Lou ][ that Robin was chosen ]!! *Kim (14) *21n order to better understand these examples, associate each constituent with the ID rule that generated it. To help with this task, the main verbs and their SUBCAT values are: (continue, 18), (appear, 16), (believe, 17), (bother, 2.0), {be, f.P.). 249 *It } There [ appeared (to us) [ to be [ nothing in the park Ill *Kim (is) { } Leslie [ believed *there [ to bother [ u= ] [ that Lee lied Ill *Kim (16) {'} We [ believed there [ to be [ no flaws in the argument HI *Kim (17) 3.2.3 Parasitic gaps Simple parasitic gaps, that is, those introduced in verb phrases by lexical rules, present no problem for RGPSG because the FFP demands all instantiations of SLASH on daughters to be equal to each other and equal to the SLASH instantiation on the mother. VP/NP vo [13] NP/NP (18) PP ['to] /NP Kim wondered which models { [ had sent [ pictures of __ ] [ to __ ]] } Sandy [ had sent [ pictures of __ ] [ to Bill ]] [ had sent [ pictures of Bill ] [ to E II (19) The FFP insists nonlexical heads be instantiated for SLASH if any nonhead daughter is, thereby explaining the unacceptability of 20 and the acceptability of 21. a. * S/NP NP/NP vP (20) b. * Kim wondered which authors [[ reviewers of E ] [ always detested sushi ]] a. S/NP NP/NP VP/NP (21) b. Kim wondered which authors [[ reviewers of ~ ] [ always detested ~]] This analysis of parasitic gaps exactly follows the one presented in GKPS on matters of fact. These facts may be questionable, however. Some sentences considered acceptable in GKPS (for example, Kim wondered which models Sandy had sent pictures of to Bill and Kim wondered which authors reviewers of always de- tested) axe marginal for some native English speakers. Note that both sentences axe marked unacceptable in the GB framework because of subjacency violations. It would be instructional to identify a~nd restrict the computa- tional resources provided by the formal devices in other linguistic theories (for example, lexical-functional grammar, government- binding theory, or morphological theory). Barton, Berwick, and Ristad (1987) explores the utility of complexity analysis in other linguistic domains, although the research strategy reported here is not the focus of that work. 5 References Barton, E., 1985. On the complexity of ID/LP parsing. Compu- tational Linguistics 11(4):205-218. Barton, E., 1986. Constraint propagation in Kimrno systems. Proceedings of the ~4th Annual Meeting of the Association for Computational Linguistics. Columbia University, New York: Association for Computational Linguistics Barton, E., R. Berwick, and E. Ristad, 1987. Computational Complczity and Natural Language. Cambridge, MA: MIT Press. Berwick, R. and K. Wexler, 1982. Parsing efficiency and c- command. Proceedings of the First West Coast Conference on Formal Linguistics. Los Angeles, CA: University of Cali- fornia at Los Angeles, pp. 29-34. Chomsky, N., 1986. Knowledge of Language: Its Origins, Nature, and Use. New York: Praeger Publishers. Gazdar, G., E. Klein, G. Putlum, and I. Sag, 1985. Generalized Phrase Structure Grammar. 
Oxford, England: Basil Black- well. Kayne, R., 1981. Unaznbiguous paths. In Levels of Syntactic Representation, R. May and J. Koster, eds. Dordrecht: Foris Publications, pp. 143-183. Pesetsky, D., 1982. Paths and categories. Ph.D. dissertation, MIT Department of Linguistics and Philosophy, Cambridge, MA. Ristad, E.S., 1986a. Computational complexity of current GPSG theory. Proceedings of the 2~th Annual Meeting of the As- sociation for Computational Linguistics. Columbia Univer- sity, N. ew York: Association for Computational Linguistics, pp. 30-39. Ristad, E.S., 1986b. Complexity of linguistic models: a com- putational analysis and reconstruction of generalized phrase structure grammar. S.M. Thesis, MIT Department of Elec- trical Engineering and Computer Science, Cambridge, MA. Shieber, S., 1986. A simple reconstruction of GPSG. Proceed- ings of the 11th International Conference on Computational Linguistics. Bonn, West Germany, 20-22 August, 1986. 4 Conclusion This work is similar to that of Shieber (1986) in its attempt to reconstruct GPSG theory. Shieber, however, is concerned solely with creating a more easily implementable description of GPSG theory, rather than with changing the theory in a linguistically or computationally significant way. 250
JETR: A ROBUST MACHINE TRANSLATION SYSTEM

Rika Yoshii
Department of Information and Computer Science
University of California, Irvine, Irvine, California, 92717(*)

ABSTRACT

This paper presents an expectation-based Japanese-to-English translation system called JETR which relies on the forward expectation-refinement process to handle ungrammatical sentences in an elegant and efficient manner without relying on the presence of particles and verbs in the source text. JETR uses a chain of result states to perform context analysis for resolving pronoun and object references and filling ellipses. Unlike other knowledge-based systems, JETR attempts to achieve semantic, pragmatic, structural and lexical invariance.

INTRODUCTION

Recently there has been a revitalized interest in machine translation as both a practical engineering problem and a tool to test various Artificial Intelligence (AI) theories. As a result of increased international communication, there exists today a massive Japanese effort in machine translation. However, systems ready for commercialization are still concentrating on syntactic information and are unable to translate syntactically obscure but meaningful sentences. Moreover, many of these systems do not perform context analysis and thus cannot fill ellipses or resolve pronoun references. Knowledge-based systems, on the other hand, tend to discard the syntax of the source text and thus are unable to preserve the syntactic style of the source text. Moreover, these systems concentrate on understanding and thus do not preserve the semantic content of the source text.

An expectation-based approach to Japanese-to-English machine translation is presented. The approach is demonstrated by the JETR system which is designed to translate recipes and instruction booklets. Unlike other Japanese-to-English translation systems, which rely on the presence of particles and main verbs in the source text (AAT 1984, Ibuki 1983, Nitta 1982, Saino 1983, Shimazu 1983), JETR is designed to translate ungrammatical and abbreviated sentences using semantic and contextual information. Unlike other knowledge-based translation systems (Cullingford 1976, Ishizaki 1983, Schank 1982, Yang 1981), JETR does not view machine translation as a paraphrasing problem. JETR attempts to achieve semantic, pragmatic, structural and lexical invariance which (Carbonell 1981) gives as multiple dimensions of quality in the translation process.

(*) The author is now located at: Rockwell International Corp., Autonetics Strategic Systems Division, Mail Code: GA42, 3370 Miraloma Avenue, P.O. Box 4192, Anaheim, California 92803-4192.

[Figure 1. JETR Components: a diagram of the Analyzer (PDA), Generator, and Context Analyzer, with data flows labeled "sends phrases, word classes and phrase roles," "sends object frames," "sends object frames and action frames," "sends modified expectations, modified object types and filled frames," "resolves object references," and "sends anaphoric frames."]

JETR is comprised of three interleaved components: the particle-driven analyzer, the generator, and the context analyzer as shown in Figure 1. The three components interact with one another to preserve information contained in grammatical as well as ungrammatical texts. The overview of each component is presented below. This paper focuses on the particle-driven analyzer.

CHARACTERISTICS OF THE JAPANESE LANGUAGE

The difficulty of translation depends on the similarity between the languages involved. Japanese and English are vastly different languages.
Translation from Japanese to English involves restructuring of sentences, disambiguation of words, and additions and deletions of certain lexical items. The following characteristics of the Japanese language have influenced the design of the JETR system:

1. Japanese is a left-branching, post-positional, subject-object-verb language.

2. Particles and not word order are important in determining the roles of the noun phrases in a Japanese sentence.

3. Information is usually more explicitly stated in English than in Japanese. There are no articles (i.e. "a", "an", and "the"). There are no singular and plural forms of nouns. Grammatical sentences can have their subjects and objects missing (i.e. ellipses).

PDA: PARTICLE-DRIVEN ANALYZER

Observe the following sentences:

Verb-deletion: Neji (screw) o (object marker) migi (right) e (direction marker) 3 kurikku (clicks).

Particle-deletion: Shio (salt) keiniku (chicken) ni (destination marker) furu (sprinkle).

The first sentence lacks the main verb, while the second sentence lacks the particle after the noun "shio." The role of "shio" must be determined without relying on the particle and the word order. In addition to the problems of unknown words and unclear or ambiguous interpretation, missing particles and verbs are often found in recipes, instruction booklets and other informal texts, posing special problems for machine translation systems.

The Particle-Driven Analyzer (PDA) is a robust intrasentence analyzer designed to handle ungrammatical sentences in an elegant and efficient manner. While analyzers of the English language rely heavily on verb-oriented processing, the existence of particles in the Japanese language and the subject-object-verb word order have led to the PDA's reliance on forward expectations from words other than verbs. The PDA is unique in that it does not rely on the presence of particles and verbs in the source text. To take care of missing particles and verbs, not only verbs but all nouns and adverbs are made to point to action frames, which are structures used to describe actions. For both grammatical and ungrammatical sentences, the PDA continuously combines and refines forward expectations from various phrases to determine their roles and to predict actions. These expectations are semantic in nature and disregard the word order of the sentence. Each expectation is an action-role pair of the form (<action> <role>). Actions are names of action frames while roles correspond to the slot names of action frames. Since the main verb is almost always found at the end of the sentence, combined forward expectations are strong enough to point to the roles of the nouns and the meaning of the verb. For example, consider "neji (screw) migi (right) e 3 kurikku (clicks)." By the time "3 clicks" is read, there are strong expectations for the act of turning, and the screw expects to be the object of the act.

[Figure 2. Expectation Refinement in the PDA: a noun, particle and verb in sequence, with the expectation list attached to each new phrase intersected with the expectations accumulated so far.]

Figure 2 describes the forward expectation-refinement process. In order to keep the expectation list to a manageable size, only ten of the most likely roles and actions are attached to each word.

[Figure 3. Expectation Mismatch in the PDA: when the intersection of a new phrase's expectations with the accumulated ones is empty, the generic role-filling process is triggered.]
The PDA is similar to IPP (Lebowitz 1983) in that words other than verbs are made to point to structures which describe actions. However, unlike IPP, a generic role-filling process will be invoked only if an unexpected verb is encountered or the forward expectations do not match. Figure 3 shows such a case. The verb will not invoke any role-filling or role-determining process if the semantic expectations from the other phrases match the verb. Therefore, the PDA discourages inefficient verb-initiated backward searches for role-fillers even when particles are missing.

Unlike LUTE (Shimazu 1983), the PDA's generic role-filling process does not rely on the presence of particles. To each slot of each action frame, acceptable filler types are attached. When particles are missing, the role-filling rule matches the object types of role fillers against the information attached to action frames. The object types in each domain are organized in a hierarchy, and frame slots are allowed to point to any level in the hierarchy. Verbs with multiple meanings are disambiguated by starting out with a set of action frames (e.g. a2 and a3) and discarding a frame if a given phrase cannot fill any slot of the frame.

The PDA's processes can be summarized as follows (a sketch of this loop appears below):

1. Grab a phrase bottom-up using syntactic and semantic word classes. Build an object frame if applicable.
2. Recall all expectations (action-role pairs) attached to the phrase.
3. If a particle follows, use the particle to refine the expectations attached to the phrase.
4. Take the intersection of the old and new expectations.
5. If the intersection is empty, set a flag.
6. If this is a verb phrase and the flag is up, invoke the generic role-filling process.
7. Else if this is the end of a simple sentence, build an action frame using forward expectations.
8. Otherwise go back to Step 1.

To achieve extensibility and flexibility, ideas such as the detachment of control structure from the word level, and the combination of top-down and bottom-up processing have been incorporated.
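The following Python sketch shows the shape of this loop for a single simple sentence. All data structures are hypothetical stand-ins for JETR's knowledge sources: lexicon maps a phrase to its stored action-role pairs, particle_table maps a particle to the pairs it signals, and frames supplies the frame-building and generic role-filling steps.

```python
def pda_analyze(phrases, lexicon, particle_table, frames):
    """Forward expectation refinement (steps 1-8) over one simple
    sentence given as (phrase, particle-or-None) pairs."""
    expectations, mismatch = None, False
    for phrase, particle in phrases:               # step 1: next phrase
        new = set(lexicon.get(phrase, ()))         # step 2: recall pairs
        if particle is not None:                   # step 3: refine by particle
            new &= set(particle_table.get(particle, ()))
        expectations = new if expectations is None \
            else expectations & new                # step 4: intersect
        if not expectations:                       # step 5: flag a mismatch
            mismatch = True
    if mismatch:                                   # step 6: fall back to the
        return frames.generic_role_fill(phrases)   # generic role-filler
    return frames.build(expectations, phrases)     # step 7: action frame
```

On "neji o migi e 3 kurikku", for instance, the running intersection would narrow to pairs naming the turning action, so the frame can be built although no verb ever appears.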
SIMULTANEOUS GENERATOR

Certain syntactic features of the source text can serve as functionally relevant features of the situation being described in the source text. Preservation of these features often helps the meaning and the nuance to be reproduced. However, knowledge-based systems discard the syntax of the original text. In other words, the information about the syntactic style of the source text, such as the phrase order and the syntactic classes of the original words, is not found in the internal representation. Furthermore, inferred role fillers, causal connections, and events are generated disregarding the brevity of the original text. For example, the generator built by the Electrotechnical Laboratory of Japan (Ishizaki 1983), which produces Japanese texts from the conceptual representation based on MOPs (Schank 1982), generates a pronoun whenever the same noun is seen the second time. Disregarding the original sentence order, the system determines the order using causal chains. Moreover, the subject and object are often omitted from the target sentence to prevent wordiness.

Unlike other knowledge-based systems, JETR can preserve the syntax of the original text, and it does so without building the source-language tree. The generation algorithm is based on the observation that human translators do not have to wait until the end of the sentence to start translating the sentence. A human translator can start translating phrases as he receives them one at a time and can apply partial syntax-transfer rules as soon as he notices a phrase sequence which is ungrammatical in the target language.

Verb Deletion:
  Shio o hikiniku ni. Mizu wa nabe ni. -> Salt on ground meat. As for the water, in a pot.
Particle Deletion:
  Hikiniku, shio o furu. -> Ground meat, sprinkle salt.
Word Order Preservation:
  o-kina fukai nabe -> big deep pot
  fukai o-kina nabe -> deep big pot
Lexical Invariance:
  200 g no hikiniku o itameru. Kosho- o hikiniku ni futte susumeru. -> Stir-fry 200 g of ground meat. Sprinkle pepper on the ground meat; serve.
  200 g no hikiniku o itameru. Kosho- o sore ni futte susumeru. -> Stir-fry 200 g of ground meat. Sprinkle pepper on it; serve.

Figure 4. Style Preservation in the Generator

The generator does not go through the complete semantic representation of each sentence built by the other components of the system. As soon as a phrase is processed by the PDA, the generator receives the phrase along with its semantic role and starts generating the phrase if it is unambiguous. Thus the generator can easily distinguish between inferred information and information explicitly present in the source text. The generator, and not the PDA, calls the context analyzer to obtain missing information that is needed to translate grammatical Japanese sentences into grammatical English sentences. No other inferred information is generated. A preposition is not generated for a phrase which is lacking a particle, and an inferred verb is not generated for a verb-less sentence. Because the generator has access to the actual words in the source phrase, it is able to reproduce frequent occurrences of particular lexical items. And the original word order is preserved as much as possible. Therefore, the generator is able to preserve idiolects, emphases, lengths, ellipses, syntax errors, and ambiguities due to missing information. Examples of target sentences for special cases are shown in Figure 4.

To achieve structural invariance, phrases are output as soon as possible without violating the English phrase order. In other words, the generator pretends that incoming phrases are English phrases, and whenever an ungrammatical phrase sequence is detected, the new phrase is saved in one of three queues: SAVED-PREPOSITIONAL, SAVED-REFINER, and SAVED-OBJECT. As long as no violation of the English phrase order is detected or expected, the phrases are generated immediately. Therefore, no source-language tree needs to be constructed, and no structural information needs to be stored in the semantic representation of the complete sentence.

To prevent awkwardness, a small knowledge base which relates source-language idioms to those of the target language is used by JETR; however, one problem with the generator is that it concentrates too much on information preservation, and the target sentences are awkward at times. Currently, the system cannot decide when to sacrifice information preservation. Future research should examine the ability of human translators to determine the important aspects of the source text.
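The queue-based output discipline can be sketched as below. This is a simplified invention: the roles, the single ordering rule (defer objects and prepositional phrases until the verb is out), and the examples stand in for JETR's actual phrase-order tests and its three queues.

```python
# Sketch of interleaved, queue-based generation (illustrative only).
# Phrases arrive in source (Japanese) order and are emitted as soon as
# English order permits; otherwise they wait in a queue, as JETR's
# SAVED-OBJECT and SAVED-PREPOSITIONAL queues do.

def generate(phrases):
    output, saved_object, saved_pp = [], [], []
    for text, role in phrases:
        if role == "object":
            saved_object.append(text)             # cf. SAVED-OBJECT
        elif role == "pp":
            saved_pp.append(text)                 # cf. SAVED-PREPOSITIONAL
        elif role == "verb":
            output.append(text)                   # verb out: flush queues
            output += saved_object + saved_pp     # in English order
            saved_object, saved_pp = [], []
        else:
            output.append(text)
    output += saved_object + saved_pp             # verb-less source stays
    return " ".join(output)                       # verb-less in the target

# "Shio o hikiniku ni furu" -> "sprinkle salt on the ground meat"
print(generate([("salt", "object"), ("on the ground meat", "pp"),
                ("sprinkle", "verb")]))
# A verb-less source such as "Mizu wa nabe ni" keeps its ellipsis:
print(generate([("as for the water", "topic"), ("in a pot", "pp")]))
```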
INSTRA: THE CONTEXT ANALYZER

The context analyzer component of JETR is called INSTRA (INSTRuction Analyzer). The goal of INSTRA is to aid the other components in the following ways:

1. Keep track of the changes in object types and forward expectations as objects are modified by various modifiers and actions.
2. Resolve pronoun references so that correct English pronouns can be generated and expectations and object types can be associated with pronouns.
3. Resolve object references so that correct expectations and object types can be associated with objects and consequently the article and the number of each noun can be determined.
4. Choose among the multiple interpretations of a sentence produced by the PDA.
5. Fill ellipses when necessary so that well-formed English sentences can be generated.

In knowledge-based systems, the context analyzer is designed with the goal of natural-language understanding in mind; therefore, object and pronoun references are resolved, and ellipses are filled as a by-product of understanding the input text. However, some human translators claim that they do not always understand the texts they translate (Slocum 1985). Moreover, knowledge-based translation systems are less practical than systems based on direct and transfer methods. Wilks (1973) states that "...it may be possible to establish a level of understanding somewhat short of that required for question-answering and other intelligent behaviors." Although identifying the level of understanding required in general by a machine translation system is difficult, the level clearly depends on the languages, the text type, and the tasks involved in translation. INSTRA was designed with the goal of identifying the level of understanding required in translating instruction booklets from Japanese to English.

A unique characteristic of instruction booklets is that every action produces a clearly defined resulting state, which is a transformed object or a collection of transformed objects that are likely to be referenced by later actions. For example, when salt is dissolved into water, the salty water is the result. When a screw is turned, the screw is the result. When an object is placed into liquid, the object, the liquid, the container that contains the liquid, and everything else in the container are the results. INSTRA keeps a chain of the resulting states of the actions. INSTRA's five tasks all deal with searches or modifications of the results in the chain.

- ingredients -
OBJ RICE  AMT 3 CUPS  ALIAS ING0
OBJ WING  ADJ CHICKEN  AMT 100 TO 120 GRAMS  ALIAS ING1
OBJ EGG  AMT 4  ALIAS ING2
OBJ BAMBOO:SHOOT  ADJ BOILED  AMT 40 GRAMS  ALIAS ING3
OBJ ONION  ADJ SMALL  AMT 1  ALIAS ING4
OBJ SHIITAKE:MUSHROOM  ADJ FRESH  AMT 2  ALIAS ING5
OBJ LAVER  AMT AN APPROPRIATE AMOUNT  ALIAS ING6
OBJ MITSUBA  AMT A SMALL AMOUNT  ALIAS ING7
- the rice is boiled -
STEP1 OBJ RICE  ALIAS ING0  ART T  REFPLURAL T
- the chicken, onion, bamboo shoots, mushrooms and mitsuba are cut -
STEP2 OBJ CHICKEN  ALIAS ING1  ART T  REFPLURAL T
STEP2 OBJ ONION  ALIAS ING4  ART T
STEP2 OBJ BAMBOO:SHOOT  ALIAS ING3  ART T  REFPLURAL T
STEP2 OBJ SHIITAKE:MUSHROOM  ADJ FRESH  ALIAS ING5  ART T  REFPLURAL T
STEP2 OBJ MITSUBA  ALIAS ING7  ART T

Figure 5. Chain of States Used by INSTRA

To keep track of the state of each object, the object type and expectations of the object are changed whenever certain modifiers are found. Similarly, at the end of each sentence, 1) the object frames representing the result objects are extracted from the frame, 2) each result object is given a unique name, and 3) the type and expectations are changed if necessary and are attached to the unique name. To identify the result of each action, information about what results from the action is attached to each frame. The result objects are added to the end of the chain, which may already contain the ingredients or object components.
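A minimal sketch of the chain and the three searches it supports (pronoun reference, object reference, and ellipsis filling) follows. The entries and the matching rule are simplified inventions; INSTRA's real rules also track number, articles, modifiers, divided objects, and step-number references.

```python
# Hedged sketch of INSTRA's chain of resulting states (illustrative only).

chain = []                               # grows as each action completes

def record_result(name, obj_type):
    """At the end of a sentence, append the action's result object(s)."""
    chain.append({"name": name, "type": obj_type})

def resolve_pronoun():
    """A pronoun refers to the result of the previous action."""
    return chain[-1]

def resolve_object(obj_type):
    """Search the chain backwards for the most recent matching object."""
    return next((o for o in reversed(chain) if o["type"] == obj_type), None)

def fill_ellipsis(acceptable_types):
    """Fill a missing role with the latest object of an acceptable type."""
    return next((o for o in reversed(chain)
                 if o["type"] in acceptable_types), None)

record_result("ING0", "rice")            # "the rice is boiled"
record_result("RESULT1", "salty-water")  # "dissolve salt into water"
print(resolve_pronoun()["name"])         # "it" -> RESULT1
print(resolve_object("rice")["name"])    # "the rice" -> ING0
print(fill_ellipsis({"rice", "meat"})["name"])   # elided object -> ING0
```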
An example of a chain of the resulting states is shown in Figure 5. In instructions, a pronoun always refers to the result of the previous action. Therefore, for each pronoun reference, the unique name of the object at the end of the chain is returned along with the information about the number (plural or singular) of the object. For an object reference, INSTRA receives an object frame, the chain is searched backwards for a match, and its unique name and information about its number are returned. INSTRA uses a set of rules that takes into account the characteristics of modifiers in instructions to determine whether two objects match. Object reference is also important in disambiguating item parts. When JETR encounters an item part that needs to be disambiguated, it goes through the chain of results to find the item which has the part and retrieves an appropriate translation equivalent. The system uses additional specialized rules for step-number references and divided objects. Ellipses are filled by searching through the chain backwards for objects whose types are accepted by the corresponding frame slots. To preserve semantic, pragmatic, and structural information, ellipses are filled only when 1) missing information is needed to generate grammatical target sentences, 2) INSTRA must choose among the multiple interpretations of a sentence produced by the PDA, or 3) the result of an action is needed. The domain-specific knowledge is stated solely in terms of action frames and object types.

INSTRA accomplishes the five tasks 1) without pre-editing and post-editing, 2) without relying on the user except in special cases involving unknown words, and 3) without fully understanding the text. INSTRA assumes that the user is monolingual. Because the method refrains from using inferences in unnecessary cases, the semantic and pragmatic information contained in the source text can be preserved.

CONCLUSIONS

This paper has presented a robust expectation-based approach to machine translation which does not view machine translation as a testbed for AI. The paper has shown the need to consider problems unique to machine translation, such as preservation of syntactic and semantic information contained in grammatical as well as ungrammatical sentences. The integration of the forward expectation-refinement process, the interleaved generation technique, and the state-change-based processing has led to the construction of an extensible, flexible, and efficient system. Although JETR is designed to translate instruction booklets, the general algorithms used by the analyzer and the generator are applicable to other kinds of text.

JETR is written in UCI LISP on a DEC system 20/20. The control structure consists of roughly 5500 lines of code. On the average it takes only 1 CPU second to process a simple sentence. JETR has successfully translated published recipes taken from (Ishikawa 1975, Murakami 1978) and an instruction booklet accompanying the Hybrid-H239 watch (Hybrid), in addition to hundreds of test texts. Currently the dictionary and the knowledge base are being extended to translate more texts. Sample translations produced by JETR are found in the appendix at the end of the paper.

REFERENCES

AAT. 1984. Fujitsu has 2-way Translation System. AAT Report 66. Advanced American Technology, Los Angeles, California.

Carbonell, J. G.; Cullingford, R. E. and Gershman, A. G. 1981. Steps Toward Knowledge-Based Machine Translation. IEEE Transactions on Pattern Analysis and Machine Intelligence, PAMI-3(4).

Cullingford, R. E.
1976. The Application of Script-Based Knowledge in an Integrated Story Understanding System. Proceedings of COLING-1976.

Granger, R.; Meyers, A.; Yoshii, R. and Taylor, G. 1983. An Extensible Natural Language Understanding System. Proceedings of the Artificial Intelligence Conference, Oakland University, Rochester, Michigan.

Hybrid. Hybrid - Cal. H239 Watch Instruction Booklet. Seiko, Tokyo, Japan.

Ibuki, J., et al. 1983. Japanese-to-English Title Translation System, TITRAN - Its Outline and the Handling of Special Expressions in Titles. Journal of Information Processing, 6(4): 231-238.

Ishikawa, K. 1975. Wakamuki Hyoban Okazu 100 Sen. Shufu no Tomo, Tokyo, Japan.

Ishizaki, S. 1983. Generation of Japanese Sentences from Conceptual Representation. Proceedings of IJCAI-1983.

Lebowitz, M. 1983. Memory-Based Parsing. Artificial Intelligence, 21: 363-404.

Murakami, A. 1978. Futari no Ryori to Kondate. Shufu no Tomo, Tokyo, Japan.

Nitta, H. 1982. A Heuristic Approach to English-into-Japanese Machine Translation. Proceedings of COLING-1982.

Saino, T. 1983. Jitsuyoka e Ririku Suru Shizengengo Shori-Gijutsu. Nikkei Computer, 39: 55-75.

Schank, R. C. and Lytinen, S. 1982. Representation and Translation. Research Report 234. Yale University, New Haven, Connecticut.

Shimazu, A.; Naito, A. and Nomura, H. 1983. Japanese Language Semantic Analyzer Based on an Extended Case Frame Model. Proceedings of IJCAI-1983.

Slocum, J. 1985. A Survey of Machine Translation: Its History, Current Status and Future Prospects. Computational Linguistics, 11(1): 1-17.

Wilks, Y. 1973. An Artificial Intelligence Approach to Machine Translation. In: Schank, R. C. and Colby, K., Eds., Computer Models of Thought and Language. W. H. Freeman, San Francisco, California: 114-151.

Yang, C. J. 1981. High Level Memory Structures and Text Coherence in Translation. Proceedings of IJCAI-1981.

Yoshii, R. 1986. JETR: A Robust Machine Translation System. Doctoral dissertation, University of California, Irvine, California.

APPENDIX - EXAMPLES

NOTE: Comments are surrounded by angle brackets.

EXAMPLE 1

SOURCE TEXT: (Hybrid)

Anarogu bu no jikoku:awase. 60 pun shu-sei. Ryu-zu o hikidashite migi e subayaku 2 kurikku mawasu to cho-shin ga 1 kaiten shite 60 pun susumu. Mata gyaku ni, hidari e subayaku 2 kurikku mawasu to cho-shin ga 1 kaiten shite 60 pun modoru. Ryu-zu o 1 kurikku mawasu tabigoto ni pitt to iu kakuninon ga deru.

TARGET TEXT:

The time setting of the analogue part. The 60 minute adjustment. Pull out the crown; when you quickly turn it clockwise 2 clicks, the minute hand turns one cycle and advances 60 minutes. Also conversely, when you quickly turn it counterclockwise 2 clicks, the minute hand turns one cycle and goes back 60 minutes. Everytime you turn the crown 1 click, the confirmation alarm "peep" goes off.

EXAMPLE 2

SOURCE TEXT: (Murakami 1978)

Tori no karaage. 4 ninmae. <<ingredients need not be separated by punctuation>> honetsuki butsugiri no keiniku 500 guramu jagaimo 2 ko kyabetsu 2 mai tamanegi 1/2 ko remon 1/2 ko paseri.

(1). Keiniku ni sho-yu o-saji 2 o karamete 1 jikan oku.
(2). Jagaimo wa yatsuwari ni shite kara kawa o muki mizu ni 10 pun hodo sarasu. <<wa is an ambiguous particle>>
(3). Tamanegi wa usugiri ni shite mizu ni sarashi kyabetsu wa katai tokoro o sogitotte tate ni 3 to-bun shite kara hosoku kizami mizu ni sarasu.
(4). Chu-ka:nabe ni abura o 6 bunme hodo ire chu-bi ni kakeru.
(5). Betsu nabe ni yu o wakashi jagaimo no mizuke o kitte 2 fun hodo yude zaru ni agete mizuke o kiru.
(6). (1) no keiniku no shiruke o kitte komugiko o usuku mabusu.
(7). Jagaimo ga atsui uchini ko-on no abura ni ire ukiagatte kita ra chu-bi ni shi kitsuneiro ni irozuite kita ra tsuyobi ni shite kararito sasete ageami de tebayaku sukuiage agedai ni totte abura o kiru.
(8). Keiniku o abura ni ire ukiagatte kita ra yowame no chu-bi ni shite 2 fun hodo kake naka made hi o to-shi tsuyobi ni shite kitsuneiro ni ageru. <<hi o to-shi is idiomatic>>
(9). (3) no tamanegi, kyabetsu no mizuke o kiru. Kyabetsu o utsuwa ni shiite keiniku o mori jagaimo to tamanegi o soe remon to paseri o ashirau.

TARGET TEXT:

Fried chicken. 4 servings. 500 grams of chopped chicken 2 potatoes 2 leaves of cabbage 1/2 onion 1/2 lemon parsely

(1). All over the chicken place 2 tablespoons of soy sauce; let alone 1 hour.
(2). As for the potatoes, after you cut them into eight pieces, remove the skin; place about 10 minutes in water.
(3). As for the onion, cut into thin slices; place in water. As for the cabbage, remove the hard part; after you cut them vertically into 3 equal pieces, cut into fine pieces; place in water.
(4). In a wok, place oil about 6/10 full; put over medium heat.
(5). In a different pot, boil hot water; remove the moisture of the potatoes; boil about 2 minutes; remove to a bamboo basket; remove the moisture.
(6). Remove the moisture of the chicken of (1); sprinkle flour lightly.
(7). While the potatoes are hot, place in the hot oil; when they float up, switch to medium heat; when they turn golden brown, switch to strong heat; make them crispy; with a lifter drainer, scoop up quickly; remove to a basket; remove the oil.
(8). Place the chicken in the oil; when they float up, switch to low medium heat; put over the heat about 2 minutes; completely let the heat work through; switch to strong heat; fry golden brown.
(9). Remove the moisture of the onion of (3) and the cabbage of (3); spread the cabbage on a dish; serve the chicken; add the potatoes and the onion; add the lemon and the parsely to garnish the dish.
AN ENVIRONMENT FOR ACQUIRING SEMANTIC INFORMATION

Damaris M. Ayuso, Varda Shaked, and Ralph M. Weischedel
BBN Laboratories Inc.
10 Moulton St.
Cambridge, MA 02238

Abstract

An improved version of IRACQ (for Interpretation Rule ACQuisition) is presented.¹ Our approach to semantic knowledge acquisition: 1) is in the context of a general purpose NL interface rather than one that accesses only databases, 2) employs a knowledge representation formalism with limited inferencing capabilities, 3) assumes a trained person but not an AI expert, and 4) provides a complete environment for not only acquiring semantic knowledge, but also maintaining and editing it in a consistent knowledge base. IRACQ is currently in use at the Naval Ocean Systems Center.

¹The work presented here was supported under DARPA contract #N00014-85-C-0016. The views and conclusions contained in this document are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of the Defense Advanced Research Projects Agency or of the United States Government.

1 Introduction

The existence of commercial natural language interfaces (NLI's), such as INTELLECT from Artificial Intelligence Corporation and Q&A from Symantec, shows that NLI technology provides utility as an interface to computer systems. The success of all NLI technology is predicated upon the availability of substantial knowledge bases containing information about the syntax and semantics of words, phrases, and idioms, as well as knowledge of the domain and of discourse context. A number of systems demonstrate a high degree of transportability, in the sense that software modules do not have to be changed when moving the technology to a new domain area; only the declarative, domain-specific knowledge need be changed. However, creating the knowledge bases requires substantial effort, and therefore substantial cost. It is this assessment of the state of the art that causes us to conclude that knowledge acquisition is one of the most fundamental problems to widespread applicability of NLI technology.

This paper describes our contribution to the acquisition of semantic knowledge as evidenced in IRACQ (for Interpretation Rule ACQuisition), within the context of our overall approach to representation of domain knowledge and its use in the IRUS natural language system [5, 6, 27]. An initial version of IRACQ was reported in [19]. Using IRACQ, mappings between valid English constructs and predicates of the domain may be defined by entering sample phrases. The mappings, or interpretation rules (IRules), may be defined for nouns, verbs, adjectives, and prepositions. IRules are used by the semantic interpreter in enforcing selectional restrictions and producing a logical form as the meaning representation of the input sentence. IRACQ makes extensive use of information present in a model of the domain, which is represented using NIKL [18, 21], the terminological reasoning component of KL-TWO [26]. Information from the domain model is used in guiding the IRACQ/user interaction, assuring that acquisition and editing yield IRules consistent with the model. Further support exists for the IRule developer through a flexible editing and debugging environment. IRACQ has been in use by non-AI experts at the Naval Ocean Systems Center for the expansion of the database of semantic rules in use by IRUS.
This paper first surveys the kinds of domain-specific knowledge necessary for an NLI as well as approaches to their acquisition (section 2). Section 3 discusses dimensions in the design of a semantic acquisition facility, describing our approach. In section 4 we describe IRules and how they are used. An example of a clause IRule definition using IRACQ is presented. Section 5 describes initial work on an IRule paraphraser. Conclusions are in section 6.

2 Kinds of Knowledge

One kind of knowledge that must be acquired is lexical information. This includes morphological information, syntactic categories, complement structure (if any), and pointers to semantic information associated with individual words. Acquiring lexical information may proceed by prompting a user, as in TEAM [13], IRUS [7], and JANUS [9]. Alternatively, efforts are underway to acquire the information directly from on-line dictionaries [3, 16].

Semantic knowledge includes at least two kinds of information: selectional restrictions or case frame constraints which can serve as a filter on what makes sense semantically, and rules for translating the word senses present in an input into an underlying semantic representation. Acquiring such selectional restriction information has been studied in TEAM, the Linguistic String Parser [12], and our system. Acquiring the meaning of the word senses has been studied by several individuals, including [11, 17]. This paper focuses on acquiring such semantic knowledge using IRACQ.

Basic facts about the domain must be acquired as well. This includes at least taxonomic information about the semantic categories in the domain and binary relationships holding between semantic categories. For instance, in the domain of Navy decision-making at a US Fleet Command Center, such basic domain facts include:

All submarines are vessels.
All vessels are units.
All units are organizational entities.
All vessels have a major weapon system.
All units have an overall combat readiness rating.

Such information, though not linguistic in nature, is clearly necessary to understand natural language, since, for instance, "Enterprise's overall rating" presumes that there is such a readiness rating, which can be verified in the axioms mentioned above about the domain. However, this is clearly not a class of knowledge peculiar to language comprehension or generation, but is in fact essential in any intelligent system. General tools for acquiring such knowledge are emerging; we are employing KREME [1] for acquiring and maintaining the domain knowledge.

Knowledge that relates the predicates in the domain to their representation and access in the underlying systems is certainly necessary. For instance, we may have the unary predicates vessel and harpoon.capable; nevertheless, the concept (i.e., unary predicate) corresponding to the logical expression (λx)[vessel(x) & harpoon.capable(x)] may correspond to the existence of a "y" in the "harp" field of the "uchar" relation of a data base. TEAM allows for acquisition of this mapping by building predicates "bottom-up" starting from database fields. We know of no general acquisition approach that will work with different kinds of underlying systems (not just databases). However, maintaining a distinction between the concepts of the domain, as the user would think of those concepts, separate from the organization of the database structure or of some other underlying system, is a key characteristic of the design and transportability of IRUS.
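To illustrate the role such domain facts play, here is a small Python sketch of a taxonomy with inherited roles. The encoding is our own invention (the paper's domain model is a NIKL network, not this structure); it only shows how a presupposition such as "Enterprise's overall rating" can be checked against the axioms above.

```python
# Hedged sketch of a domain taxonomy with inherited roles (not NIKL).
SUBSUMES = {"submarine": "vessel", "vessel": "unit",
            "unit": "organizational.entity"}
ROLES = {"vessel": {"major.weapon.system"},
         "unit":   {"overall.combat.readiness.rating"}}

def ancestors(concept):
    while concept is not None:
        yield concept
        concept = SUBSUMES.get(concept)

def has_role(concept, role):
    """A concept inherits the roles of everything that subsumes it."""
    return any(role in ROLES.get(c, set()) for c in ancestors(concept))

# "Enterprise's overall rating": Enterprise is a vessel, vessels are
# units, and all units have an overall combat readiness rating.
print(has_role("vessel", "overall.combat.readiness.rating"))   # True
print(has_role("submarine", "major.weapon.system"))            # True
```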
Finally, a fifth kind of knowledge is a set of domain plans. Though no extensive set of such plans has been developed yet, there is growing agreement that such a library of plans is critical for understanding narrative [20], a user's needs [22], ellipsis [8, 2], and ill-formed input [28], as well as for following the structure of discourse [14, 15]. Tools for acquiring a large collection of domain plans from a domain expert, rather than an AI expert, have not yet appeared. However, inferring plans from textual examples is under way [17].

3 Dimensions of Acquiring Semantic Knowledge

We discuss in this section several dimensions available in designing a tool for acquiring semantic knowledge within the overall context of an NLI. In presenting a partial description of the space of possible semantic acquisition tools, we describe where our work and the work of several other significant, recently reported systems fall in that space of possibilities.

3.1 Class of underlying systems.

One could design tools for a specific subclass of underlying systems, such as database management systems, as in TEAM [13] and TELI [4]. The special nature of the class of underlying systems may allow for a more tailored acquisition environment, by having special-purpose, stereotypical sequences of questions for the user, and more powerful special-purpose inferences. For example, in order to acquire the variety of lexical items that can refer to a symbolic field in a database (such as one stating whether a mountain is a volcano), TEAM asks a series of questions, such as "Adjectives referencing the positive value?" (e.g., volcanic), and "Abstract nouns referencing the positive value?" (e.g., volcano). The fact that the field is binary allows for few and specific questions to be asked.

The design of IRACQ is intended to be general-purpose so that any underlying system, whether a database, an expert system, a planning system, etc., is a possibility for the NLI. This is achieved by having a level of representation for the concepts, actions, and capabilities of the domain, the domain model, separate from the model of the entities in the underlying system. The meaning representation for an input, a logical form, is given in terms of predicates which correspond to domain model concepts and roles (and are hence referred to as domain model predicates). IRules define the mappings from English to these domain model predicates. In our NLI, a separate component then translates from the meaning representation to the specific representation of the underlying system [24, 25]. IRACQ has been used to acquire semantic knowledge for access to both a relational database management system and an ad hoc application system for drawing maps, providing calculations, and preparing summaries; both systems may be accessed from the NLI without the user being particularly aware that there are two systems rather than one underneath the NLI.

3.2 Meaning representation.

Another dimension in the design of a semantic knowledge acquisition tool is the style of the underlying semantic representation for natural language input. One could postulate a unique predicate for almost every word sense of the language. TEAM seems to represent this approach. At some later level of processing than the initial semantic acquisition, a level of inference or question/answering must be provided so that the commonalities of very similar word senses are captured and appropriate inferences made.
A second approach seems to be represented in TELI, where the meaning of a word sense is translated into a boolean composition of more primitive predicates. IRACQ represents a related approach, but we allow a many-to-one mapping between word senses and predicates of the domain, and use a more constraining representation for the meaning of word senses. Following the analysis of Davidson [10], we represent the meaning of events (and also of states of affairs) as a conjunction of a single unary predicate and arbitrarily many binary predicates. Objects are represented by unary predicates and are related through binary relations. Using such a representation limits the kind and numbers of questions that have to be asked of the user by the semantic acquisition component. The representation dovetails well with using NIKL [18, 21], a taxonomic knowledge representation system with a formal semantics, for stating axioms about the domain.

3.3 Model of the domain

One may choose to have an explicit, separate representation for concepts of the domain, along with axioms relating them. Both IRUS and TEAM have explicit models. Such a representation may be useful to several components of a system needing to do some reasoning about the domain. The availability of such information is a dimension in the design of semantic acquisition systems, since domain knowledge can streamline the acquisition process. For example, knowing what relations are allowable between concepts in the domain aids in determining what predicates can hold between concepts mentioned in an English expression, and therefore, what are valid semantic mappings (IRules, in our case).

Our NIKL representation of the domain knowledge, the domain model, forms the semantic backbone of our system. Meaning is represented in terms of domain model predicates; its hierarchy is used for enforcing selectional restrictions and for IRule inheritance; and some limited inferencing is done based on the model. After semantic interpretation is complete, the NIKL classification algorithm is used in simplifying and transforming high-level meaning expressions to obtain the underlying systems' commands [25]. Due to its importance, the domain model is developed carefully in consultation with domain experts, using tools to assure its correctness. This approach of developing a domain model independently of linguistic considerations or of the type of underlying system is to be distinguished from other approaches where the domain knowledge is shaped mostly as a side effect of other processes such as lexical acquisition or database field specification.

3.4 Assumptions about the user of the acquisition tool.

If one assumes a human in the semantic acquisition process, as opposed to an automatic approach, then expectations regarding the training and background of that user are yet another dimension in the space of possible designs. The acquisition component of TELI is designed for users with minimal training. In TEAM, database administrators or those capable of designing and structuring their own database use the acquisition tools. Our approach has been to assume that the user of the acquisition tool is sophisticated enough to be a member of the support staff of the underlying system(s) involved, and is familiar with the way the domain is conceived by the end users of the NLI.
More particularly, we assume that the individual can become comfortable with logic so that he/she may recognize the correctness of logical expressions output by the semantic interpreter, but need not be trained in AI techniques. A total environment is provided for that class of user so that the necessary knowledge may be acquired, maintained, and updated over the life cycle of the NLI. We have trained such a class of users at the Naval Ocean Systems Center (NOSC) who have been using the acquisition tools for approximately a year and a half.

3.5 Scope of utilities provided.

It would appear that most acquisition systems have focused on the inference problem of acquiring knowledge initially and have paid relatively little attention to explaining to the user what knowledge has been acquired, providing sophisticated editing facilities above the level of the internal data structures themselves, or providing consistency checks on the database of knowledge acquired. Providing such a complete facility is a goal of our effort; feedback from non-AI staff using the tool has already yielded significant direction along those lines. The tool currently has a very sophisticated, flexible debugging environment for testing the semantic knowledge acquired independently of the other components of the NLI, can present the knowledge acquired in tables, and uses the set of domain facts as a way of checking the consistency of what the user has proposed and suggesting alternatives that are consistent with what the system already knows. Work is also underway on an intelligent editing tool guaranteeing consistency with the model when editing, and on an English paraphraser to express the content of a semantic rule.

4 IRACQ

The original version of IRACQ was conceived by R. Bobrow and developed by M. Moser [19]. From sample noun phrases or clauses supplied by the user, it inferred possible selectional restrictions and let the user choose the correct one. The user then had to supply the predicates that should be used in the interpretation of the sample phrase, for inclusion in the IRule.
Since semantic processing is integrated with syntactic processing in IRUS, the IRules serve to block a semantically anomalous phrase as soon as it is proposed by the parser. Thus, selectional restrictions (or case frame constraints) are continuously applied. However, the semantic representation of a phrase is constructed only when the phrase is believed com- plete. There are IRules for four kinds of heads: verbs, nouns, adjectives, and prepositions. The left hand side of the. IRule states the selectional restrictions on the modifiers of the head. The right hand side specifies the predicates that should be used in con- structing a logical form corresponding to the phrase which fired the IRule. When a head word of a phrase is proposed by the parser to the semantic interpreter, all IRules that can apply to the head word for the given phrase type are gathered as follows: for each semantic property that is associated with the word, the IRules associated with the given domain model term are retrieved, along with any inherited IRules. A word can also have IRules fired directly by it, without involving the model. Since the IRules corresponding to the different word senses may give rise to separate interpretations, they are carried along in parallel as the processing continues. If no IRules are retrieved, the interpreter rejects the word. One use of the domain model is that of IRule in- heritance. When an IRule is defined, the user decides whether the new IRule (the base IRule) should inherit from IRules attached to higher domain model terms (the inherited IRules), or possibly inherit from other IRules specified by the user. When a modifier of a head word gets transmitted and no pattern for it exists in a base IRule for the head word, higher IRules are searched for the pattern. If a pattern does exist for the modifier in a given IRule, no higher ones are tried even if it does not pass the semantic test. That is, inheritance does not relax semantic constraints. 4.2 An IRACQ session In this section we step through the definition of a clause IRule for the word "send *, and assume that lexical information about "send ~ has already been en- tered. The sense of "sending" we will define, when used as the main verb of a clause, specifies an event type whose representation is as follows: ( Z x) [deployment(x) & agent(x, a) & object(x, o) & destination(x, d)], where the agent a must be a commanding officer, the object o must be a unit and the destination d must be a region. From the example clauses presented by the t~ser IRACQ must learn which unary and binary predicate:. are to be used to obtain the representation above Furthermore, IRACQ must acquire the most geP.e'~ semantic class to which the variables a, o, and d ,~,=~ belong. Output from the system is shown in bold face input from the user in regular face, and comments at,.. inserted in italics. Word that should trigger this IRule: send Domain model term to connect IRule to (select-K to view the network): deployment <A: At this point the user may wish to view the domain mode/network using our graphical displaying and edi~ng facility KREME[1] to decide the correct concept that should be associated with this word (KREME may in fact be invoked at any time). The user may even add a new con- cept, which will be tagged with the user's name and date for later verification by the domain mode/ builder, who has full knowledge of the implications that adding a concept may have on the rest of the sys- tem. 
Alternatively, the user may omit the answer for now; in that case, IRACQ can proceed as before, and at B will present a menu of the concepts it already knows to be consistent with the example phrases the 35 user provides. Figure 1 shows a picture of the network around DEPLOYMENT.> lew Concept New Hoh Edit Rob u~ Figure 1: Network centered on DEPLOYMENT Enter an example sentence using "send": An admiral sent Enterprise to the Indian Ocean. <IRACQ uses the furl power of the IRUS parser and interpreter to interpret this sen- tence. A temporary IRule for "send" is used which accepts any modifier (it is assumed that the other words in the sentence can aJready be understood by the system.) IRACQ recognizes that an admiral is of the type COMMANDING.OFFICER, and dis- plays a menu of the ancestors of COMMANDING.OFFICER in the NIKL taxonomy (figure 2).> Choose a generalization for COMMANDING.OFFICER COMMANDING.OFFICER PERSON CONSCIOUS.BEING ACTIVE.ENTITY OBJECT THING Figure 2: Generalizations of COMMANDING.OFFICER <The user's selection specifies the case frame constraint on the logical subject of "send'. The user picks COMMANDING.OFFICER. IRACQ will per- form similar inferences and present a menu for the other cases in the example phrase as well, asking each time whether the modifier is required or optional Assume that the user selects UNIT as the logical object and REGION as the object of the preposition "to".> <B: If the user did not specify the concept DEPLOYMENT (or some other concept) at point A above as the central concept in this sense of "sending', then IRACQ would compute those unary concepts c such that there are binary predicates relating c to each case's constraint, e.g., to COMMANDING.OFFICER, REGION, and UNIT. The user would be presented with a menu of such concepts c. IRACQ would now proceed in the same way for A or B.> <IRACQ then looks in the NIKL domain model for binary predicates relating the event class (e.g., DEPLOYMENT) to one of the cases' semantic class (e.g. REGION), and presents the user with a menu of those binary predicates (figure 3). Mouse options allow the user to retrieve an explanation of how a predicate was found, or to look at the network around it. The user picks DESTINA T/ON.OF.> Which of the following predicates should relate DEPLOYMENT to REGION in the MRL?: Figure 3: LOCATION.OF DESTINATION.OF Relations between DEPLOYMENT and REGION <IRACQ presents a menu of binary predi. catas relating DEPLOYMENT and COMMANDING.OFFICER, and one relating DEPLOYMENT and UNIT. The user picks AGENT and OBJECT, raspective/y.> Enter examples using "send" or <CR> if done: <The user may provide more examples. Redundant information would be recognized automatically.> Should this IRule inherit from higher IRules? yes <A popup window allowing the user to enter comments appears. The default com- ment has the creation date and the user's name.> This is the IRule you just defined: (IRule DEPLOYMENT.4 (clause subject (is-a COMMANDING.OFFICER) head * object (is-a UNIT) pp ((pp head to pobj (is-a REGION)))) (bind ((commanding.officer.1 (optional subject)) (unit.1 object) (region.1 (optional (pp 1 pobj)))) (predicate '(destination.of *v" region.I)) (predicate '(object.of "v" unit.l)) 36 (predicate '(agent *v" commanding.officer.I)) (class 'DEPLOYMENT))) Do you wish to edit the IRule? no <The person may, for example, want to insert something in the action part of the IRule that was not covered by the IRACQ questions.> This concludes our sample IRACQ session. 
4.3 Debugging environment

The facility for creating and extending IRules is integrated with the IRUS NLI itself, so that debugging can commence as soon as an addition is made using IRACQ. The debugging facility allows one to request IRUS to process any input sentence in one of several modes: asking the underlying system to fulfill the user request, generating code for the underlying system, generating the semantic representation only, or parsing without the use of semantics (on the chance that a grammatical or lexical bug prevents the input from being parsed). Intermediate stages of the translation are automatically stored for later inspection, editing, or reuse.

IRACQ is also integrated with the other acquisition facilities available. As the example session above illustrates, IRACQ is integrated with KREME, a knowledge representation editing environment. Additionally, the IRACQ user can access a dictionary package for acquiring and maintaining both lexical and morphological information. Such a thoroughly integrated set of tools has proven not only pleasant but also highly productive.

4.4 Editing an IRule

If the user later wants to make changes to an IRule, he/she may directly edit it. This procedure, however, is error-prone. The syntax rules of the IRule can easily be violated, which may lead to cryptic errors when the IRule is used. More importantly, the user may change the semantic information of the IRule so that it no longer is consistent with the domain model. We are currently adding two new capabilities to the IRule editing environment:

1. A tool that uses some of the same IRACQ software to let the user expand the coverage of an IRule by entering more example sentences.
2. In the case that the user wants to bypass IRACQ and modify an IRule, the user will be placed into a restrictive editor that assures the syntactic integrity of the IRule, and verifies the semantic information with the domain model.

5 An IRule Paraphraser

An IRule paraphraser is being implemented as a comprehensive means by which an IRACQ user can observe the capabilities introduced by a particular IRule. Since paraphrases are expressed in English, the IRule developer is spared the details of the IRule internal structure and the meaning representation. The IRule paraphraser is useful for three main purposes: expressing IRule inheritance so that the user does not redundantly add already inherited information, identifying omissions from the IRule's linguistic pattern, and verifying IRule consistency and completeness. This facility will aid in specifying and maintaining correct IRules, thereby blocking anomalous interpretation of input.

5.1 Major design features

The IRule paraphraser makes central use of the IRUS paraphraser (under development), which paraphrases user input, particularly in order to detect ambiguities. The IRUS paraphraser shares in large part the same knowledge bases used by the understanding process, and is completely driven by the IRUS meaning representation language (MRL) used to represent the meaning of user queries. Given an MRL expression for an input, the IRUS paraphraser first transforms it into a syntactic generation tree in which each MRL constituent is assigned a syntactic role to play in an English paraphrase. The syntactic roles of the MRL predicates are derived from the IRules that could generate the MRL. In the second phase of the IRUS paraphraser, the syntactic generation tree is transformed into an English sentence.
This process uses an ATN grammar and ATN interpreter that describes how to combine the various syntactic slots in the generation tree into an English sentence. Morphological processing is performed where necessary to inflect verbs and adjectives, pluralize nouns, etc.

The IRule paraphraser expresses the knowledge in a given IRule by first composing a stereotypical phrase from the IRule linguistic pattern (i.e., the left hand side of the IRule). For the "send" IRule of the previous section, such a phrase is "A commanding officer sent a unit to a region". For inherited IRules, the IRule paraphraser composes representative phrases that match the combined linguistic patterns of both the local and the inherited IRules. Then, the IRUS parser/interpreter interprets that phrase using the given IRule, thus creating an MRL expression. Finally, the IRUS paraphraser expresses that MRL in English.

Providing an English paraphrase from just the linguistic pattern of an IRule would be simple and uninteresting. The purpose of obtaining MRLs for representative phrases and using the IRUS paraphraser to go back to the English is to force the use of the right hand side of the IRule, which specifies the semantic interpretation. In this way anomalies introduced by, for example, manually changing variable names in the right hand side of the IRule (which point to linguistic constituents of the left hand side), can be detected.

5.2 Role within IRACQ

IRACQ will invoke the IRule Paraphraser at two interaction points: (1) at the start of an IRACQ session when the user has selected a concept to which to attach the new IRule (paraphrasing IRules already associated with that concept shows the user what is already handled; a new IRule might not even be needed), and (2) at the end of an IRACQ session, assisting the user in detecting anomalies.

The planned use of the IRule Paraphraser is illustrated below with a shortened version of an IRACQ session.

Word that should trigger this IRule: change
Domain model term to connect IRule to: change.in.readiness

Paraphrases for existing IRules (inherited phrases are capitalized):

Local IRule: change.in.readiness.1
"A unit changed from a readiness rating to a readiness rating"

Inherited IRule: event.be.predicate.1
"A unit changed from a readiness rating to a readiness rating" {IN, AT} A LOCATION

<Observing these paraphrases will assist the IRACQ user in making the following decisions:

• A new CHANGE.IN.READINESS.2 IRule needs to be defined to capture sentences like "the readiness of Frederick changed from C1 to C2".
• Location information should not be repeated in the new CHANGE.IN.READINESS.2 IRule since it will be inherited.

The IRACQ session proceeds as described in the previous example session.>
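A sketch of the first step, composing a stereotypical phrase from an IRule's linguistic pattern, might look as follows; the pattern encoding and the word choices are invented stand-ins for the IRule left-hand side and the lexicon.

```python
# Hedged sketch of stereotypical-phrase composition from an IRule pattern.

def stereotype(head_verb, pattern):
    """pattern: constituent -> semantic class from the IRule's left side."""
    words = lambda cls: "a " + cls.replace(".", " ")   # crude realization
    phrase = [words(pattern["subject"]), head_verb, words(pattern["object"])]
    for prep, cls in pattern.get("pps", {}).items():
        phrase += [prep, words(cls)]
    return " ".join(phrase).capitalize()

print(stereotype("sent", {"subject": "commanding.officer",
                          "object": "unit",
                          "pps": {"to": "region"}}))
# -> A commanding officer sent a unit to a region
```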
6 Concluding Remarks

Our approach to semantic knowledge acquisition: 1) is in the context of a general purpose NL interface rather than one that accesses only databases, 2) employs a knowledge representation formalism with limited inferencing capabilities, 3) assumes a trained person but not an AI expert, and 4) provides a complete environment for not only acquiring semantic knowledge, but also maintaining and editing it in a consistent knowledge base. This section comments on what we have learned thus far about the point of view espoused above.

First, we have transferred the IRUS natural language interface, which includes IRACQ, to the staff of the Naval Ocean Systems Center. The person in charge of the effort at NOSC has a master's degree in linguistics and had some familiarity with natural language processing before the effort started. She received three weeks of hands-on experience with IRUS at BBN in 1985, before returning to NOSC where she trained a few part-time employees who are computer science undergraduates. Development of the dictionary and IRules for the Fleet Command Center Battle Management Program (FCCBMP), a large Navy application [23], has been performed exclusively by NOSC since August, 1986. Currently, about 5000 words and 150 IRules have been defined.

There are two strong positive facts regarding IRACQ's generality. First, IRUS accesses both a large relational data base and an applications package in the FCCBMP. Only one set of IRules is used, with no cleavage in that set between IRules for the two applications. Second, the same software has been useful for two different versions of IRUS. One employs MRL [29], a procedural first order logic, as the semantic representation of inputs; the second employs IL, a higher-order intensional logic. Since the IRules define selectional restrictions, and since the Davidson-like representation (see section 3) is used in both cases, IRACQ did not have to be changed; only the general procedures for generating quantifiers, scoping decisions, treatment of tense, etc. had to be revised in IRUS. Therefore, a noteworthy degree of generality has been achieved.

Our key knowledge representation decisions were the treatment of events and states of affairs, and the use of NIKL to store and reason about axioms concerning the predicates of our logic. This strongly influenced the style and questions of our semantic acquisition process. For example, IRACQ is able to propose a set of predicates that is consistent with the domain model to use for the interpretation of an input phrase. We believe representation decisions must dictate much of an acquisition scenario no matter what the decisions are. In addition, the limited knowledge representation and inference techniques of NIKL deeply affected other parts of our NLI, particularly in the translation from conceptually-oriented domain predicates to predicates of the underlying systems.

The system does provide an initial version of a complete environment for creating and maintaining semantic knowledge. The result has been very desirable compared to earlier versions of IRACQ and IRUS that did not have such debugging aids nor integration with tools for acquiring and maintaining the domain model. We intend to integrate the various acquisition, consistency, editing, and maintenance aids for the various knowledge bases even further.

References

1. Abrett, G., and Burstein, M. H. The BBN Laboratories Knowledge Acquisition Project: KREME Knowledge Editing Environment. BBN Report No. 6231, Bolt Beranek and Newman Inc., 1986.

2. Allen, J.F. and Litman, D.J. "Plans, Goals, and Language". Proceedings of the IEEE 74, 7 (July 1986), 939-947.

3. Amsler, R.A. A Taxonomy for English Nouns and Verbs. Proceedings of the 19th Annual Meeting of the Association for Computational Linguistics, 1981.

4. Ballard, Bruce and Stumberger, Douglas. Semantic Acquisition in TELI: A Transportable, User-Customized Natural Language Processor. Proceedings of The 24th Annual Meeting of the ACL, ACL, June, 1986, pp. 20-29.

5. Bates, M. and Bobrow, R.J. A Transportable Natural Language Interface for Information Retrieval.
Proceedings of the 6th Annual International ACM SIGIR Conference, ACM Special Interest Group on Information Retrieval and American Society for Information Science, Washington, D.C., June, 1983.

6. Bates, Madeleine. Accessing a Database with a Transportable Natural Language Interface. Proceedings of The First Conference on Artificial Intelligence Applications, IEEE Computer Society, December, 1984, pp. 9-12.

7. Bates, M., and Ingria, R. Dictionary Package Documentation. Unpublished Internal Document, BBN Laboratories.

8. Carberry, M.S. A Pragmatics-Based Approach to Understanding Intersentential Ellipsis. Proceedings of the 23rd Annual Meeting of the Association for Computational Linguistics, Association for Computational Linguistics, Chicago, IL, July, 1985, pp. 188-197.

9. Cumming, S. and Albano, R. A Guide to Lexical Acquisition in the JANUS System. Information Sciences Institute/RR-85-162, USC/Information Sciences Institute, 1986.

10. Davidson, D. The Logical Form of Action Sentences. In The Logic of Grammar, Dickenson Publishing Co., Inc., 1975, pp. 235-245.

11. Granger, R.H. "The NOMAD System: Expectation-Based Detection and Correction of Errors during Understanding of Syntactically and Semantically Ill-Formed Text". American Journal of Computational Linguistics 9, 3-4 (1983), 188-198.

12. Grishman, R., Hirschman, L., and Nhan, N.T. "Discovery Procedures for Sublanguage Selectional Patterns: Initial Experiments". Computational Linguistics 12, 3 (July-September 1986), 205-215.

13. Grosz, B., Appelt, D. E., Martin, P., and Pereira, F. TEAM: An Experiment in the Design of Transportable Natural Language Interfaces. 356, SRI International, 1985. To appear in Artificial Intelligence.

14. Grosz, B.J. and Sidner, C.L. Discourse Structure and the Proper Treatment of Interruptions. Proceedings of IJCAI85, International Joint Conferences on Artificial Intelligence, Inc., Los Angeles, CA, August, 1985, pp. 832-839.

15. Litman, D.J. Linguistic Coherence: A Plan-Based Alternative. Proceedings of the 24th Annual Meeting of the Association for Computational Linguistics, ACL, New York, 1986, pp. 215-223.

16. Markowitz, J., Ahlswede, T., and Evens, M. Semantically Significant Patterns in Dictionary Definitions. Proceedings of the 24th Annual Meeting of the Association for Computational Linguistics, June, 1986.

17. Mooney, R. and DeJong, G. Learning Schemata for Natural Language Processing. Proceedings of the Ninth International Joint Conference on Artificial Intelligence, IJCAI, 1985, pp. 681-687.

18. Moser, M.G. An Overview of NIKL, the New Implementation of KL-ONE. In Research in Knowledge Representation for Natural Language Understanding - Annual Report, 1 September 1982 - 31 August 1983, Sidner, C. L., et al., Eds., BBN Laboratories Report No. 5421, 1983, pp. 7-26.

19. Moser, M. G. Domain Dependent Semantic Acquisition. Proceedings of The First Conference on Artificial Intelligence Applications, IEEE Computer Society, December, 1984, pp. 13-18.

20. Schank, R., and Abelson, R. Scripts, Plans, Goals, and Understanding. Lawrence Erlbaum Associates, 1977.

21. Schmolze, J. G., and Israel, D.J. KL-ONE: Semantics and Classification. In Research in Knowledge Representation for Natural Language Understanding - Annual Report, 1 September 1982 - 31 August 1983, Sidner, C.L., et al., Eds., BBN Laboratories Report No. 5421, 1983, pp. 27-39.

22. Sidner, C.L. "Plan Parsing for Intended Response Recognition in Discourse". Computational Intelligence 1, 1 (February 1985), 1-10.
23. Simpson, R.L. "AI in C3, A Case in Point: Applications of AI Capability". SIGNAL, Journal of the Armed Forces Communications and Electronics Association 40, 12 (1986), 79-86.

24. Stallard, D. Data Modelling for Natural Language Access. The First Conference on Artificial Intelligence Applications, IEEE Computer Society, December, 1984, pp. 19-24.

25. Stallard, David G. A Terminological Simplification Transformation for Natural Language Question-Answering Systems. Proceedings of The 24th Annual Meeting of the ACL, ACL, June, 1986, pp. 241-246.

26. Vilain, M. The Restricted Language Architecture of a Hybrid Representation System. Proceedings of IJCAI85, International Joint Conferences on Artificial Intelligence, Inc., Los Angeles, CA, August, 1985, pp. 547-551.

27. Walker, E., Weischedel, R.M., and Ramshaw, L. "IRUS/Janus Natural Language Interface Technology in the Strategic Computing Program". Signal 40, 12 (August 1986), 86-90.

28. Weischedel, R.M. and Ramshaw, L.A. Reflections on the Knowledge Needed to Process Ill-Formed Language. In Machine Translation: Theoretical and Methodological Issues, S. Nirenburg, Ed., Cambridge University Press, Cambridge, England, to appear.

29. Woods, W.A. Semantics and Quantification in Natural Language Question Answering. In Advances in Computers, M. Yovits, Ed., Academic Press, 1978, pp. 1-87.
GRAMMATICAL AND UNGRAMMATICAL STRUCTURES IN USER-ADVISER DIALOGUES: EVIDENCE FOR SUFFICIENCY OF RESTRICTED LANGUAGES IN NATURAL LANGUAGE INTERFACES TO ADVISORY SYSTEMS

Raymonde Guindon
Microelectronics and Computer Technology Corporation
P.O. Box 900195, Austin, Texas 78720
guindon@mcc.com

Kelly Shuldberg¹
University of Texas, Austin / MCC

Joyce Conner
Microelectronics and Computer Technology Corporation

¹Now at Automated Language Processing Systems, Provo, Utah

ABSTRACT

User-adviser dialogues were collected in a typed Wizard-of-Oz study ("man-behind-the-curtain" study). Thirty-two users had to solve simple statistics problems using an unfamiliar statistical package. Users received help on how to use the statistical package by typing utterances to what they believed was a computerized adviser. The observed limited set of users' grammatical and ungrammatical forms demonstrates the sufficiency of a very restricted grammar of English for a natural language interface to an advisory system. The users' language shares many features of spoken face-to-face language or of language generated under real-time production constraints (i.e., very simple forms of utterances). Yet, users also appeared to believe that the natural language interface could not handle fragmentary or informal language, and users planned or edited their language to be more like formal written language (i.e., very infrequent fragments and phatics). Finally, users also appeared to believe in poor shared context between users and computerized advisers and referred to objects and events using complex nominals instead of faster-to-type pronouns.

INTRODUCTION

It has been argued that natural language interfaces with very rich functionality are crucial to the effective use of advisory systems and that interfaces using formal languages, menus, or direct manipulation will not suffice (Finin, Joshi, and Webber, 1986). Designing, developing, and debugging a rich natural language interface (its parser, grammar, recovery strategies from unparsable input, etc.) are time-consuming and labor-intensive. Nevertheless, natural language interfaces can be quite brittle in the face of unconstrained input from the user, as can be found in applications such as user-advising. One step toward a solution to these problems would be the identification of a subset of grammatical and ungrammatical structures that correspond to the language generated by users in any user-advising situation, irrespective of the domain. This subset could be used to design a core grammar, strategies to handle ungrammatical input, and some parsing heuristics portable to any natural language interface to advisory systems. This strategy would increase the habitability of the natural language interface (Watt, 1968; Trawick, 1983) and reduce its development cost.

An important feature of this restricted subset is its independence from a particular domain (e.g., statistics, medicine), making it portable. This is in contrast with another strategy which also capitalizes on restricted subsets of English, the use of sublanguages. There are naturally occurring subsets of English, usually associated with a particular domain or trade, that have been called sublanguages (Harris, 1968; Kittredge, 1982). Sublanguages are characterized by distinctive specialized syntactic structures, by the occurrence of only certain domain-dependent word subclasses in certain syntactic combinations, and by the inclusion of specific ungrammatical forms (Sager, 1982).
However, the association of a sublanguage with a particular domain and the emphasis on syntactic-semantic co-restrictions reduce the portability of a grammar defined on such a sublanguage. This paper presents an empirical characterization of users' language in a user-advising situation for the purpose of defining a domain-independent restricted subset of grammatical and ungrammatical structures to help design more habitable natural language interfaces to advisory systems. This paper also presents an interpretation of the factors that cause users to naturally limit themselves to a very restricted subset of English in typed communications between users and computerized advisers. We believe these factors will be found in any typed communications between users and advisers for the purposes of performing a primary task. Hence, the restricted subset of English should be general to any such situations.

A STUDY OF USER-ADVISER DIALOGUES IN A WIZARD-OF-OZ SETTING

METHOD AND PROCEDURE
Thirty-two graduate students with basic statistical knowledge were asked to solve up to eleven simple statistics problems. Participants had to use an unfamiliar statistical package to solve the problems. The upper window of the participants' screen was used to perform operations with the statistical package and the lower window was used to type utterances to the adviser. The participants were instructed to ask for help in English from what they believed was a computerized adviser by typing in the help window. The participants' and adviser's utterances were sent to each other's monitor, and the utterances were recorded and time-stamped automatically to files.

¹Now at Automated Language Processing Systems, Provo, Utah.

RESULTS AND COMPARISON TO OTHER STUDIES
We are reporting only a small subset of our results, those to be compared to the results of Thompson (1980) and of Chafe (1982). The comparison is meant to identify the grammatical and ungrammatical specializations specific to users' language with advisory systems and to help determine what features of user-advising situations might encourage or cause such specializations of structures. Chafe (1982) investigated informal spoken language (i.e., dinner table conversations) and formal written language (i.e., academic papers). Thompson (1980), in her second study, compared three types of dialogues: Spoken Face-to-Face, Typed Human-Human (terminal-to-terminal) with both conversants knowing their counterpart was human, and Human-Computer using the REL natural language front-end. The task was information retrieval. The data tables report two sets of data: the percentage of utterances with a particular form (e.g., one or more fragments, one or more phatics), to compare to Thompson's results, and the corresponding number of occurrences of this form per 1000 words, to compare to Chafe's results. When numbers are omitted from the tables, the corresponding data were not collected by Thompson or Chafe. Note that the reported data concern only users' utterances, not the adviser's utterances. We will use "typed user-adviser dialogues" and "Wizard-of-Oz condition" to refer to the data of our study.

Completeness and Formality of Users' Utterances
As can be seen in Table 1, for completeness (i.e., fragments) and formality (i.e., phatics and and-connectors), users' utterances with advisory systems are more like Human-Computer dialogues and Formal Written language than Spoken Face-to-Face or Typed Human-Human dialogues.

Table 1: Completeness and Formality (percentage of utterances)

              Wizard-of-Oz   Human-Computer   Typed Human-Human
  Fragments        24%             19%               74%
  Phatics           2%              4%               59%

  [The Spoken Face-to-Face values, the and-connector row, and the per-1000-word counts are not legibly recoverable from the source.]
Users avoided casual forms of language: they produced only 24% fragmentary utterances, as opposed to 74% in Typed Human-Human dialogues, but similar to 19% in the Human-Computer condition. Similarly, we found 2% of utterances with phatics, as opposed to 59% in the Typed Human-Human dialogues, but similar to 4% in the Human-Computer dialogues. Likewise, Chafe (1982) found no phatics in Formal Written discourse, but about 23 per 1000 words in informal speech. There is a similar finding for and-connectors. Users in the typed user-adviser dialogues seem to expect the interface to be unable to handle fragmentary input such as is found in informal spoken language, and planned or edited their language to be as complete and formal as in the Human-Computer dialogues, and more complete and formal than the language in Typed Human-Human dialogues. This is the case even though the Wizard in our study hardly ever rejected or misunderstood any user utterance, no matter how fragmentary or ungrammatical it was. However, when conversants know that their counterpart is another human, their language contains a large percentage of fragments and phatics, even when typed. So it appears that a priori beliefs about the nature and abilities of the adviser (i.e., this is not a human) can determine the characteristics of the language produced by the user, even when the task and linguistic performance of the adviser are not negatively affected by fragmentary language from the user.

Ungrammaticalities
Even though users seemed to attempt to edit or plan their utterances to be more complete and formal, 31% of the utterances contained one or more ungrammaticalities (excluding spelling and punctuation mistakes; if these are included, about 50% of utterances were ungrammatical). The most frequent ungrammaticalities were fragments (13% of utterances with one or more parts being fragments), missing constituents (14% of utterances with one or more determiners missing), and lack of agreement between constituents (5% of utterances). While users seemed to plan or edit their language to be as complete and formal as in the Human-Computer dialogues, certain types of ungrammaticalities were produced. Two possible interpretations of this finding are: 1) certain types of ungrammaticalities do not seem to be easily under the conversant's control, to be edited or planned away during the dialogue; 2) they correspond to a telegraphic language assumed to be understood by the interface. It would be interesting to find out whether two types of ungrammaticalities really exist: some that can be avoided with some planning, and others that cannot be so easily avoided. However, it is unclear whether the purposeful avoidance of some ungrammaticalities by users can be capitalized upon to reduce the need for sophisticated robust parsing, as we do not know the cost to users of avoiding certain types of ungrammaticalities. On the other hand, knowing the nature and frequency of the actual ungrammaticalities produced by users, as provided by this study, facilitates building robust parsing.

General Syntactic Features
As can be seen in Table 2, users' utterances in typed user-adviser dialogues resemble informal spoken discourse more than formal written discourse.
The difference in number of occurrences per 1000 words between the Wizard-of-Oz condition and the Informal Spoken condition is much less than the same difference between the Wizard-of-Oz and the Formal Written conditions.

Table 2: Occurrences per 1000 Words of Various Syntactic Features

  [Columns: Wizard-of-Oz, Chafe Informal Speech, Chafe Formal Written. Rows include sentence length, passive voice, coordinating conjunctions, attributive adjectives, first person references, nominalizations, nominalized verbs and their subjects, and relative clauses; the numeric values are not legibly recoverable from the source.]

Short, simple (95% of our utterances were simple), active sentences, with few coordinations, few subordinations, few relative clauses, few nominalizations (and deletion of determiners and unmarked agreement; see the section on Ungrammaticalities) characterize the language in the typed user-adviser dialogues observed in our study. These are features of unplanned language, which are also features of child language and of language produced under real-time production constraints (Ochs, 1979; Givon, 1979).

While the formality and completeness of typed user-adviser dialogues resemble Formal Written language more, the general syntactic features of typed user-adviser dialogues resemble Informal Spoken language more. Formality and completeness appear to be properties of users' language independent of the general syntactic features, possibly planned independently. More important for the design of natural language interfaces, the observation that typed user-adviser dialogues resemble language produced under real-time production constraints indicates that users are strained by typing utterances to request help while performing a primary task. This constrains the usability of natural language interfaces as interfaces to advisory systems. One needs to identify the conditions under which the benefits of obtaining help outweigh the costs of typing in utterances, to determine when natural language interfaces are effective interfaces to advisory systems. On the other hand, the natural restrictions on the language produced by the users appear generalizable to any situation where real-time production constraints exist, of which, we believe, any typed interaction with an advisory system for the purpose of performing a primary task is an instance.

Features Due Specifically to the User-Advising Application
As can be seen in Table 3, there are fewer imperatives in user-advising dialogues because the user cannot request the adviser to perform a statistical operation. Moreover, we also observe a goal-directed language with frequent to-infinitives (I want/need to ...) and to purpose clauses (What is the command to compute ...), much more frequent than in Informal Spoken or Formal Written language. We believe this is the only feature that appears to be specific to the advisory application, as opposed to being specific to communications under real-time constraints. However, the goal-directedness of the language may be specific to advisory systems for procedural tasks, as opposed to more general information retrieval tasks. Of course, we are here excluding lexical restrictions, because they are expected and uninteresting, and syntactic-semantic co-restrictions, because of the desire for easy portability.
Table 3: Features Specific to Advising

  [Columns: Wizard-of-Oz, Chafe Informal Speech, Chafe Formal Written. Rows: imperatives, to-complements; the numeric values are not legibly recoverable from the source.]

Complexity of Referring Expressions
In our study, users produced mostly very simple sentence constructions, as if under real-time production constraints (e.g., users' utterances were short, and 95% of them were simple; see the section on General Syntactic Features). Nevertheless, very few pronouns occurred: 3% of utterances contained pronouns, similar to what was found in Formal Written language, in Human-Computer dialogues, and in Cohen, Fertig, & Starr (1982) in their typed terminal-to-terminal condition. This is surprising because pronouns are very short to type. However, there were very frequent complex nominals with prepositional phrases (e.g., a record of the listing of the names of the features). At least 50% of the user-adviser utterances had one or more prepositional phrases. As can be seen in Table 4, most of the structurally ambiguous prepositional attachments are to NPs, in fact mostly to the most contiguous, nearest NP. So users prefer longer-to-type complex nominals with explicit relations between contiguous NPs over faster-to-type pronouns, even though there is evidence that they are operating under real-time production constraints. Because pronominal noun phrases (and also deictic expressions) are so rare, it appears that users rely little on spatial context (i.e., the screen), linguistic context (i.e., the utterances produced so far), and task context (i.e., the statistical commands typed so far) in producing referring expressions. One interpretation of this finding is that users believe that there is poor shared context between user and adviser when they do not share physical context (as in Formal Written language) or do not know the linguistic capabilities of the conversant (as in Human-Computer dialogues). So, while in unplanned discourse speakers rely more on the context to express propositions and use more pronouns than in planned discourse (Ochs, 1979), and while user-adviser dialogues exhibit many features of unplanned discourse, users did not capitalize on context in producing referring expressions. It appears that the referential functions in language can be planned independently of, and are not necessarily subject to, the same real-time production constraints as the predicative and other functions of language. Again we are finding that typed user-adviser dialogues have some features of planned, Formal Written language but also features of unplanned, Informal Spoken language.

Table 4: Distribution of Prepositional Attachments

  [Rows include attachments to NP, complex NP, NP/VP, nearest NP, other NPs, and VP, with ambiguous and unambiguous cases tallied separately; the counts are not legibly recoverable from the source.]

Nevertheless, not only are most prepositional attachments to NPs, creating precise descriptions of objects, they are mostly to the most contiguous NP. This observation suggests that real-time production constraints nevertheless play some role in the production of referential expressions. Users appear to minimize the resources allocated to the production of referential expressions by reducing short-term memory load through attachments to the lowest, most recent NP. This interpretation is supported by studies that show that it is easier to process right-branching structures than left-branching ones (Yngve, 1960).
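The prevalence of nearest-NP attachment suggests a simple default disambiguation heuristic that an interface could apply. The sketch below is ours, not part of the study; the flat list of attachment sites and the function names are illustrative assumptions.

# A minimal sketch of a "nearest NP" default for prepositional-phrase
# attachment, motivated by the distribution in Table 4. The flat-site
# representation is an assumption for illustration.

def attach_pp(pp, sites):
    """Attach a PP to the rightmost open NP site; fall back to the
    rightmost VP if no NP is available.

    sites: candidate attachment sites in left-to-right order, e.g.
           [("VP", "need"), ("NP", "record"), ("NP", "listing")]
    """
    for wanted in ("NP", "VP"):
        for category, head in reversed(sites):  # rightmost site first
            if category == wanted:
                return (pp, head)
    return (pp, None)

# "a record of the listing of the names of the features": each of-PP
# attaches to the nearest preceding NP, giving a right-branching chain.
sites = [("VP", "need"), ("NP", "record"), ("NP", "listing"), ("NP", "names")]
print(attach_pp("of the features", sites))  # -> ('of the features', 'names')

Such a default matters precisely because the number of possible attachments grows combinatorially with the number of trailing prepositional phrases, as the next paragraph notes.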
The finding that most prepositional phrases attach to NPs rather than VPs, and moreover attach most often to the lowest, nearest NP, is important for the semantic interpretation of sentences because of the combinatorial explosion of possible attachments of prepositional phrases.

DISCUSSION
Users' utterances in typed user-adviser dialogues, when the users believe that the adviser is computerized, resemble Informal Spoken speech, except for referring expressions (i.e., frequent complex nominals) and for completeness and formality (i.e., few phatics and and-connectors, and relatively few fragments), in which respects they resemble Formal Written language more. We would like to hypothesize that the grammatical and ungrammatical forms observed occur because the communicative context and the application induce certain user beliefs and goals and certain processing constraints, which determine the most effective syntactic forms to communicate verbally. The communicative context describes dimensions of the situation in which the discourse is generated that are believed to affect the form of the discourse. Examples of dimensions are: interaction, the extent to which user and adviser can quickly interact and respond to each other; involvement, the extent to which the communication is directed specifically to one person as opposed to an anonymous class of persons; spatial commonality, the degree to which the conversants see each other, see the same physical environment, and know that they share this environment perceptually. As can be seen in Table 5, typed user-adviser dialogues in a Wizard-of-Oz setting are more similar to Informal Spoken language on the dimensions of interaction and involvement, but more similar to Formal Written language on the dimension of spatial commonality. We would like to hypothesize that different values on these dimensions are associated with different restricted languages produced by the users. Findings from Biber (1986) help support this hypothesis. He performed a factor analysis on 545 text samples. He uncovered the following three dimensions:

• INTERACTIVE vs. Edited: high personal involvement and real-time constraints.
• SITUATED vs. Abstract contexts: reliance on the external situation, concrete vs. detached and deliberate.
• IMMEDIATE vs. Reported: reference to a current situation vs. a removed or past situation.

Table 5: Communicative Context Parameters

  [Columns: Informal Spoken, Terminal-to-Terminal, Wizard-of-Oz, Formal Written. Rows include interaction, involvement, and shared knowledge/spatial commonality; the cell values are not legibly recoverable from the source.]

From the set of features reported by Biber that loaded highly on the three dimensions, user-adviser dialogues had features of both interactive texts (e.g., many wh-questions, many first person references, final prepositions) and edited texts (e.g., few phatics). This is because user-adviser dialogues, while written by users uncertain about the interface's ability to handle fragmentary and informal input, have a high degree of interaction and involvement of the conversants. The syntactic features observed in user-adviser dialogues overlapped greatly with the features of situated texts (e.g., few passives and nominalizations), except for the frequent use of complex nominals and infrequent use of pronouns and deictic expressions, and with those of immediate texts (e.g., use of present tense, few third person pronouns).
The complexity of referring expressions uncovers a dimension not revealed in Biber's work: the degree of shared knowledge believed by the conversants. Our users seemed to assume poor shared knowledge and relied on complex referring expressions to ensure successful communication. Another dimension is the conversants' belief in the ability of their counterpart to handle fragmentary or informal language. Informal spoken face-to-face language is often unplanned, interactive, situated, immediate, and subject to real-time production constraints. So are users' typed utterances to advisory systems. However, unlike informal spoken face-to-face language, users believe that there is poor shared context between conversants and rely little on context in producing referring expressions, and users do not assume that the interface can handle fragmentary or informal language. We would like to conclude by making the hypothesis that any typed terminal-to-terminal user-adviser dialogues will be similar to Informal Spoken language, as was observed in our study, because they are produced under the same communicative context and application. This provides a subset of grammatical and ungrammatical forms that can be used to define a core grammar portable to most user-advising situations, irrespective of the domain. On the other hand, the complexity of referring expressions and the degree of completeness and formality of language may differ according to the users' beliefs about the linguistic capabilities of the interface.

ACKNOWLEDGEMENTS
We wish to thank Elaine Rich, Kent Wittenburg, and Gregg Whittemore for useful comments on this research project. We also thank Sherry Kalin, Hans Brunner, and Gregg Whittemore for their help in collecting or analyzing the dialogues between users and adviser.

REFERENCES
Biber, D. (1986). Spoken and written textual dimensions in English. Language, 62(2), 384-414.
Chafe, W.L. (1982). Integration and involvement in speaking, writing, and oral literature. In D. Tannen (Ed.), Spoken and written language: Exploring orality and literacy. Norwood, NJ: Ablex.
Cohen, P.R., Fertig, S., & Starr, K. (1982). Dependencies of discourse structure on the modality of communication: Telephone vs. teletype. Proceedings of the 20th Annual Meeting of the Association for Computational Linguistics. University of Toronto, Ontario, Canada.
Finin, T.W., Joshi, A.K., & Webber, B.L. (1986). Natural language interactions with artificial experts. Proceedings of the IEEE, 74(7), 921-938.
Givon, T. (1979). From discourse to syntax: Grammar as a processing strategy. In T. Givon (Ed.), Syntax and Semantics: Discourse and syntax. New York: Academic Press.
Grishman, R., Hirshman, L., & Nhan, N.T. (1986). Discovery procedures for sublanguage selectional patterns: Initial experiments. Computational Linguistics, 12(3).
Harris, Z.S. (1968). Mathematical Structures in Language. New York: Wiley (Interscience).
Kittredge, R. (1982). Variation and homogeneity of sublanguages. In R. Kittredge & J. Lehrberger (Eds.), Sublanguage: Studies of Language in Restricted Semantic Domains. New York: Walter de Gruyter & Co.
Ochs, E. (1979). Planned and unplanned discourse. In T. Givon (Ed.), Syntax and Semantics: Discourse and syntax. New York: Academic Press.
Sager, N. (1982). Syntactic formatting of science information. In R. Kittredge & J. Lehrberger (Eds.), Sublanguage: Studies of Language in Restricted Semantic Domains. New York: Walter de Gruyter & Co.
Thompson, B.H. (1980). Linguistic analysis of natural language communication with computers. Proceedings of the 8th International Conference on Computational Linguistics. Tokyo, Japan.
Trawick, D.J. (1983). Robust Sentence Analysis and Habitability. Doctoral dissertation, California Institute of Technology, Pasadena.
Watt, W.C. (1968). Habitability. American Documentation, 19(3), 338-351.
Yngve, V. (1960). A model and an hypothesis for language structure. Proceedings of the American Philosophical Society.
An Attribute-Grammar Implementation of Government-binding Theory

Nelson Correa
Department of Electrical and Computer Engineering
Syracuse University
111 Link Hall
Syracuse, NY 13244

ABSTRACT
The syntactic analysis of languages with respect to Government-binding (GB) grammar is a problem that has received relatively little attention until recently. This paper describes an attribute grammar specification of Government-binding theory. The paper focuses on the description of the attribution rules responsible for determining antecedent-trace relations in phrase-structure trees, and on some theoretical implications of those rules for the GB model. The specification relies on a transformation-less variant of Government-binding theory, briefly discussed by Chomsky (1981), in which the rule move-α is replaced by an interpretive rule. Here the interpretive rule is specified by means of attribution rules. The attribute grammar is currently being used to write an English parser which embodies the principles of GB theory. The parsing strategy and attribute evaluation scheme are cursorily described at the end of the paper.

Introduction
In this paper we consider the use of attribute grammars (Knuth, 1968; Waite and Goos, 1984) to provide a computational definition of the Government-binding theory laid out by Chomsky (1981, 1982). This research thus constitutes a move in the direction of seeking specific mechanisms and realizations of universal grammar. The attribute grammar provides a specification at a level intermediate between the abstract principles of GB theory and the particular automata that may be used for parsing or generation of the language described by the theory. Almost by necessity and the nature of the goal set out, there will be several arbitrary decisions and details of realization that are not dictated by any particular linguistic or psychological facts, but perhaps only by matters of style and possible computational efficiency considerations in the final product. It is therefore safe to assume that the particular attribute grammar that will be arrived at admits of a large number of non-isomorphic variants, none of which is to be preferred over the others a priori. The specification given here is for English. Similar specifications of the parametrized grammars of typologically different languages may eventually lead to substantive generalizations about the computational mechanisms employed in natural languages.

The purpose of this research is twofold. First, to provide a precise computational definition of Government-binding theory, as its core ideas are generally understood. We thus begin to provide an answer to criticisms that have recently been leveled against the theory regarding its lack of formal explicitness (Gazdar et al., 1985; Pullum, 1985). Unlike earlier computational models of GB theory, such as that of Berwick and Weinberg (1984), which assumes Marcus' (1980) parsing automaton, the attribute grammar specification is more abstract and neutral regarding the choice of parsing automata. Attribute grammar offers a language specification framework whose formal properties are generally well understood and explored. A second and more important purpose of the present research is to provide an alternate and mechanistic characterization of the principles of universal grammar.
To the extent that the implementation is correct, the principles may be shown to follow from the system of attributes in the grammar and the attribution rules that define their values. The current version of the attribute grammar is presently being used to implement an English parser written in Prolog. Although the parser is not yet complete, we expect that its breadth of coverage of the language will be substantially larger than that of other Government-binding parsers recently reported in the literature (Kashket (1986), Kuhns (1986), Sharp (1985), and Wehrli (1984)). Since the parser is firmly based on Government-binding theory, we expect its ability to handle natural language phenomena to be limited only by the accuracy and correctness of the underlying theory. In the development below I will assume that the reader is familiar with the basic concepts and terminology of Government-binding theory, as well as with attribute grammars. The reader is referred to Sells (1985) for a good introduction to the relevant concepts of GB theory, and to Waite and Goos (1984) for a concise presentation on attribute grammars.

The Grammatical Model Assumed
For the attribute grammar specification we assume a transformation-less variant of Government-binding theory, briefly discussed by Chomsky (1981, p.89-92), in which rule move-α is eliminated in favor of a system Mα of interpretive rules which determines antecedent-trace relations. A more explicit proposal of a similar nature is also made by Koster (1978). We assume a context-free base, satisfying the principles of X'-theory, which generates structure trees directly at a surface structure level of representation. S-structure may be derived from surface structure by application of Mα. The rest of the theory remains as in standard Government-binding (except for some obvious reformulation of principles that refer to grammatical functions at D-Structure). The grammatical model that obtains is that of (1). The base generates surface structures, with phrases in their surface places along with empty categories where appropriate. Surface structure is identical to S-structure, except for the fact that the association between moved phrases and their traces is not present; the chain indices that reveal the history of movement in the transformational account are not present. The interpretive system Mα, here defined by attribution rules, then applies to construct the absent chains and thus establish the linking relations between arguments and positions in the argument structures of their predicates, yielding the S-structure level. In this manner the operations formerly carried out by transformations reduce to attribute computations on phrase-structure trees.

(1)  Context-free base
            |
     Surface structure
            |  Mα
       S-Structure
          /    \
        PF      LF

Interpretive Rule
I sketch briefly how the interpretive system Mα is defined. Two attributes, node and Chain, are associated with NP, and a method for functionally classifying empty categories in structure trees is developed (relying on conditions of Government and Case-marking). In addition, two attributes A-Chain and Ā-Chain are defined for every syntactic category which may be found in the c-command domain of NP. In particular, A-Chain and Ā-Chain are defined for C, COMP', S, INFL', VP, and V' (assuming Chomsky's (1986) two-level X'-system). The meanings attached to these attributes are as follows.
Node defines a preorder enumeration of tree nodes; Chain is an integer that represents the syntactic chain to which an NP belongs; A-Chain (Ā-Chain) determines whether an argument (non-argument) chain propagates across a given node of a tree, and gives the number of that chain, if any. Somewhat arbitrarily, and for the sake of concreteness, we assume that a chain is identified by the node number of the phrase that heads the chain. For the root node, the attribution rules dictate A-Chain = Ā-Chain = 0. The two attributes are then essentially percolated downwards. However, whenever a lexical NP or PRO is found in a θ-position, an argument chain is started, setting the value of A-Chain to the node number of the NP found, which is used to identify the new chain. Thus NP-traces in the c-command domain of the NP are able to identify their antecedent. Similarly, when a wh-phrase is found in COMP specifier position, the value of Ā-Chain is set to the chain number of that phrase, and lower wh-traces may pick up their antecedent in a similar fashion. Downwards propagation of the attributes A-Chain and Ā-Chain explains in a simple way the observed c-command constraint between a trace and its antecedent. The precise statement of the attribution rules that implement the interpretive rule described is given in Appendix A. In the formulation of the attribution rules, it is assumed that certain other components of Government-binding theory have already been implemented, in particular parts of Government and Case theories, which contribute to the functional determination of empty categories. The implementation of the relevant parts of these subtheories is described elsewhere (Correa, in preparation). We assume that all empty categories are base-generated, as instances of the same EC [NP e]. Their types are then determined structurally, in a manner similar to the proposal made by Koster (1978). The attributes empty, pronominal, and anaphoric used by the interpretive system achieve a full functional partitioning of NP types (van Riemsdijk and Williams (1986), p.278); their values are defined by the attribution rules in Appendix B, relying on the values of the attributes Governor and Cases. The values of these attributes are in turn determined by the Government and Case theories, respectively, and indicate the relevant governor of the NP and the grammatical Case assigned to it. The claim associated with the interpretive rule, as it is implemented in Appendix A, is that given a surface structure in the sense defined above, it will derive the correct antecedent-trace relations after it applies. An illustrative sample of its operation is provided in (3), where the (simplified) structure tree of sentence (2) is shown. The annotations superscripted to the C, COMP', S, INFL', VP, and V' nodes are the A-Chain and Ā-Chain attributes, respectively. Thus, for the root node, the value of both attributes is zero. Similarly, the superscripts on the NP nodes represent the node and Chain attributes of the NP. The last NP in the tree, complement of 'love', thus bears node number 5 and belongs to Chain 1.

Some Theoretical Implications: Bounding Nodes and Subjacency
In Government-binding theory it is assumed that the set of bounding nodes that a language may select is not fixed across human languages, but is open to parametric variation.
Rizzi (1978) observed that in Italian the Subjacency condition is systematically violated by double wh-extraction constructions, as in (4.a), if one assumes for Italian the same set of bounding nodes as for English. The analogous construction (4.b) is also possible in Spanish. A solution, considered by Rizzi to explain the grammaticality of (4), is to assume that in Italian and Spanish, COMP specifier position may be "doubly filled" in the course of a transformational derivation, while requiring that it be not doubly filled (by non-empty phrases) at S-Structure. Thus both moved phrases 'a cui' and 'che storie' can move to the lowest COMP position in the first transformational cycle, while in the second cycle 'a cui' may move to the next higher COMP and 'che storie' stays in the first COMP.

(2) Who1 did John2 seem [ e1 [ e2 to love e1 ]]

(3) [In the following rendering of the tree, the (A-Chain, Ā-Chain) annotations appear in parentheses on the C, COMP', S, INFL', VP, and V' nodes, and the (node, Chain) annotations on the NP nodes.]

C(0,0)
├─ NP(1,1) Who1
└─ COMP'(0,1)
   ├─ COMP did
   └─ S(0,1)
      ├─ NP(2,2) John2
      └─ INFL'(2,1)
         ├─ INFL
         └─ VP(2,1)
            └─ V'(2,1)
               ├─ V seem
               └─ C(2,1)
                  ├─ NP(3,1) e1
                  └─ COMP'(2,1)
                     ├─ COMP
                     └─ S(2,1)
                        ├─ NP(4,2) e2
                        └─ INFL'(0,1)
                           ├─ INFL to
                           └─ VP(0,1)
                              └─ V'(0,1)
                                 ├─ V love
                                 └─ NP(5,1) e1

A second solution, which is the one adopted by Rizzi and constitutes the currently accepted explanation of the (apparent) Subjacency violation, is to assume that Italian and Spanish select C and NP as bounding nodes, a set different from that of English. The first phrase 'che storie' may then move to the lowest COMP position in the first transformational cycle, while the second, 'a cui', moves in the next cycle in one step to the next higher position, crossing two S nodes but, crucially, only one C node. Thus Subjacency is satisfied if C, not S, is taken as a bounding node.

(4) a. Tuo fratello, [a cui]i mi domando [che storie]j abbiano raccontato ej ei, era molto preoccupato.
       Your brother, to whom I wonder what stories they have told, was very worried.
    b. Tu hermano, [a quien]i me pregunto [que historias]j le habran contado ej ei, estaba muy preocupado.

The empirical data that arguably distinguishes between the two proposed solutions is (5.a). While the "doubly filled" COMP hypothesis allows indefinitely long wh-chains with doubly filled COMPs, making it possible for a wh-chain element and its successor to skip more than one COMP position that already contains some wh-phrase, the "bounding node" hypothesis states that at most one filled COMP position may be skipped. Thus, the second hypothesis, but not the first, correctly predicts the ungrammaticality of (5.a).

(5) a. * Juan, [a quien]i no me imagino [cuanta gente]j ej sabe dondek han mandado ei ek, desaparecio ayer.
       Juan, whom I can't imagine how many people know where they have sent, disappeared yesterday.
    b. La Gorgona, [a donde]i no me imagino [cuanta gente]j ej sabe [a quienes]k han mandado ek ei, es una bella isla.
       La Gorgona, to where I can't imagine how many people know whom they have sent, is a beautiful island.

One might observe, however, that (5.a), even if it satisfies Subjacency, violates Pesetsky's (1982) Path Containment Condition (PCC). Thus, on these grounds, (5.a) does not decide between the two hypotheses. The grammaticality of (5.b), on the other hand, which is structurally similar to (5.a) but satisfies the PCC, argues in favor of the "doubly filled" COMP hypothesis. The wh-phrase 'a donde' moves from its D-Structure position to the surface position, skipping two intermediate COMP positions.
This is possible if we assume the doubly filled COMP hypothesis, and would violate Subjacency under the alternate hypothesis, even if C is taken as the bounding node. We expect a pattern similar to (5.b) to be valid in Italian as well. Movement across doubly filled COMP nodes, satisfying Pesetsky's (1982) Path Containment Condition, may be explained computationally if we assume that the type of the Ā-Chain attribute on chain nodes is a last-in/first-out (lifo) stack of integers, into which the integers identifying Ā-chain heads are pushed as they are first encountered, and from which chain identifiers are dropped as the chains are terminated. If we further assume that the type of the attribute is universal, we may explain the typological difference between Italian and English, as it refers to the Subjacency condition, by assuming the presence of an Ā-Chain stack depth bound, which is parametrized by universal grammar, and has the value 1 for English, and 2 (or possibly more) for Italian and Spanish.

To conclude this section, it is worth reviewing the manner in which the Subjacency facts are explained by the present attribute grammar implementation. Notice first that there is no particular set of categories in the theory that have been declared as bounding categories. There is no special procedure that checks that the Subjacency condition is actually satisfied by, say, traversing paths between adjacent chain elements in a tree and counting bounding nodes. Instead, the facts follow from the attribution rules that determine the values of the attributes A-Chain and Ā-Chain. This can be verified by inspection of the possible cases of movement. Thus, NP-movement is from object or INFL specifier position to the nearest INFL specifier which c-commands the extraction site. Similarly, wh-movement is from object, INFL specifier, or COMP specifier position to the nearest c-commanding COMP specifier. If the bound on the depth of the Ā-Chain stack is 1, either S or COMP' (but not both) may be taken as bounding node, and wh-island phenomena are observable. If the bound is 2 or greater, then C is the closest approximation to a bounding node (although cf. (5.b)), and wh-island violations which satisfy the PCC are possible. NP is a bounding node as a consequence of the strong condition that no chain spans across an NP node, which in turn is a consequence of the rules (ii.e) in Appendix A.
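The stack-based treatment of the Ā-Chain attribute just described can be rendered concretely. The following is a minimal sketch, not the paper's implementation: the class and method names are our own, and the only grammar parameter is the stack depth bound (1 for English, 2 or more for Italian and Spanish).

# A toy rendering of the parametrized A-bar chain stack. Pushing a new
# wh-chain head beyond the depth bound models a Subjacency violation.

class SubjacencyError(Exception):
    pass

class AbarChainStack:
    def __init__(self, depth_bound):
        self.depth_bound = depth_bound  # parametrized by universal grammar
        self.stack = []                 # lifo stack of chain identifiers

    def push_head(self, node_number):
        """A wh-phrase in COMP specifier position opens a new A-bar chain,
        identified by the node number of its head."""
        if len(self.stack) >= self.depth_bound:
            raise SubjacencyError("A-bar chain stack depth bound exceeded")
        self.stack.append(node_number)

    def current_chain(self):
        """The chain a wh-trace lower in the tree would be linked to."""
        return self.stack[-1] if self.stack else 0

    def pop_chain(self):
        """Terminate the innermost chain when its trace is found."""
        return self.stack.pop()

italian = AbarChainStack(depth_bound=2)
italian.push_head(1)   # 'a cui' in the higher COMP
italian.push_head(3)   # 'che storie' in the lower COMP: fine in Italian

english = AbarChainStack(depth_bound=1)
english.push_head(1)
try:
    english.push_head(3)   # a wh-island violation in English
except SubjacencyError as e:
    print(e)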
The parser performs rea- sonably well with a good number of constructions and, due to the stack bound, avoids potentially infinite derivations which could arise due to the application of mutually recursive rules. Attributes are implemented by logical variables which are asso- ciated with tree nodes (cf. Arbab, 1986). Most attri- butes can be evaluated in a preorder traversal of the parse tree, and thus attribute evaluation may be combined with LL(1) parser actions. Notable excep- tions to this evaluation order are the attributes Governor, Cases, and Os associated with the NP in INFL specifier position. The value of these attri- butes cannot be determined until the main verb of the relevant clause is found. Conclusions We have presented a computational specification of a fragment of Government-binding theory with potentially far-reaching theoretical and practical implications. From a theoretical point of view, the present attribute grammar specification offers a fairly concrete framework which may be used to study the development and stable state of human linguistic competence. From a more practical point of view, the attribute grammar serves as a Starting point for the development of high quality parsers for natural languages. To the extent that the specification is explanatorily adequate, the language described by the grammar (recognized by the parser) may be changed by altering the values of the universal parameters in the grammar and changing the underlying lexicon. Acknowledgements I would like to thank my dissertation advisor, Jaklin Kornfilt, for helpful and timely advise at all stages of this research. Also, I wish to thank an anonymous ACL reviewer who pointed out the simi- laxity of the grammatical model I assume to that proposed by Koster (1978), Mary Laughren and Beth Levin for their discussion and commentary on related aspects of this research, Ed Barton, who kindly made available some of the early literature on GB parsing, Mike Kashket for some critical com- ments, and Ed Stabler for his continued support of this project. Support for this research has been pro- vided in part by the CASE Center at Syracuse University. References Aho, A.V., and J.D. Ullman. 1972. The Theory of Parsing, Translation and Compiling. Prentice-Hall, Englewood Cliffs, NJ Arbab, Bijan. 1986. "Compiling Circular Attribute Grammars into Prolog." IBM Journal of Research and Development, Vol. 30, No. 3, May 1986 Berwick, Robert and Amy Weinberg. 1984. The Grammatical Basis of Linguistic Perfor- mance. The MIT Press. Cambridge, MA Chomsky, Noam. 1981. Lectures on Government and Binding. Foris Publications. Dordreeht Chomsky, Noam. 1982. Some Concepts and Conse- quences of the Theory of Government and Binding. The MIT Press. Cambridge, MA Chomsky, Noam. 1986. Barriers. The MIT Press. Cambridge, MA Correa, Nelson. In preparation. Syntactic Analysis of English with respect to Government- binding Grammar. Ph.D. Dissertation, Syra- cuse University Gazdar, Gerald, Ewin Klein, Geoffrey Pullum, and Ivan Sag. 1985. Generalized Phrase Structure Grammar. Harvard University Press. Cam- bridge, MA Jaekendoff, Ray. 1977. X Syntaz: A Study o/ Phrase Structure. The MIT Press. Cambridge, MA Kashket, Michael. 1986. "Parsing a Free-word Order Language: Walpiri." Proceedings of the 24th Annual Meeting o/ the Association /or 49 Computational Linguistics, p.60-66. Knut:h, Donald E. 1968. "Semantics of Context-free Languages." In Mathematical Systems Theory, Vol. 2, No. 2, 1968 Koster, Jan. 1978. 
"Conditions, Empty Nodes, and Markedness." Linguistic Inquiry, Vol. 9, No. 4. Kuhns, Robert. 1986. "A PROLOG Implementation of Government-binding Theory." Proceedinge of the Annual Conference of the European Chapter of the Association for Computational Linguistics, p.546-550. Marcus, Mitchell. 1980. A Theory of Syntactic Recognition for Natural Language. The MIT Press. Cambridge, MA Pesetsky, D. 1982. Paths and Categories. Ph.D. Dissertation, MIT Pullum, Geoffrey. 1985. "Assuming Some Vemion of the X-bar Theory." Syntax Research Center, University of California, Santa Cruz Rizzi, Luigi. 1978. "Violations of the Wh-lsland Constraint in Italian and the Subjacency Condition." Montreal Working Papers in Linguistics 11 Sells, Peter. 1985. Lectures on Contemporary Syn- tactic Theories. Chicago University Press. Chicago, Illinois Sharp, Randall M. 1985. A Model of Grammar Baaed on Principles of Government and Bind- ing. M.Sc Thesis, Department of Computer Science, University of British Columbia. October, 1985 Van Riemsdijk, Honk and Edwin Williams. 1986. An Introduction to the Theory of Grammar. The MIT Press. Cambridge, MA Waite, William M. and Gerhard Coos. 1984. Com- piler Construction. Springer-Verlag. New York Wehrli, Erie. 1984. "A Government-binding Parser for French." Institut pour les Etudes Seman- tiques et Cognitives, Universite de Geneve. Working Paper No. 48 Appendix A: The Chain Rule i. General rule and condition attributior~: NP.Chain .-- if NP.empty ---- '-' then NP.node else if NP.pronominal -- '+' then NP.node else if NP.anaphoric = '+' then NP.A-Chain else N'P.A- Chain condition: NP.Chain # 0 ii. Productions a. Start production Z-*C attribution: C.A-Chain *-- 0 C.X-Chain ,-- 0 b. COMP productions C --, COMP' attribution: COMP'.x ~ C.x, for x = A-Chain, X-Chain condition: C.A-Chain = 0 " C~NP COMP' ottribution: NP.x *- C.x, for x ~ A-Chain, ~-Chain COMP'.A-Chain ,-- C.A-Chain COMP'.A-Chain ~- NP.Chain condition: NP.Wh = '+' COMP' --* COMP S attribution: S.x *-- COMF'.x, for x ---- A-Chain, A -Chain e. INFL productions S ~ NP INFL' attribution: NP.x ~- S.x, for x = A-Chain, A-Chain INFL'.A-Chain if NP.as = 'nil' then NP.Chain else 0 INFL'A -Chain *-- if NP.Chain = S.X-Chain then 0 else S.A-Chain 50 INFL' --* INFL VP attribution: VP.x *- INFL'.x, for x =- A-Chain, A -Chain d. V productions VP--. V' attribution: V'.x *-- VP.x, for x ----- A-Chain, A -Chain V'--* V NP attribution: NP.x *-- V'.x, for x -~ A-Chain, .W.-Chain V'---, V C attribution: C.x *-- V'.x, for x ---- A-Chain, A -Chain V'--* V NP C attribution: NP.x *-- V'.x, for x ---- A-Chain, A-Chain C.A-Chain *-- 0 C7, -Chain if NP.Chain = V'.A -Chain then 0 else V'.•-Chain e.N NI:'~ N'~ productions (/VP ~) N' attribution: NP~-A-Chain ~- 0 NP2.~-Chain *- 0 N (PP)(C) attribution: PP-A-Chain *-- 0 PP./T-Chain *-- 0 C-A-Chain ~ 0 C.A'-Chain *- 0 Appendix B: Functional determination of NP i. General Rules atCrib ution: NP.pronominal if NP.empty = '-' then N'.pronominal else if NP.Governor = <0,'nil'> then '+' else '-' NP.anaphoric if NP.empty = '-' then N'.anaphoric else if NP. Whs ~- '+' then '-' else if NP.Governor = <0,'nil'> then '+' else if NP. Cases ~ 'nil' then '+' else '-' ii. Productions NP-*~ attribution NP.empty *-- '+' NP --* (Spec) N' attribution NP.empty 4--- '-' 51
GETTING IDIOMS INTO A LEXICON BASED PARSER'S HEAD

Oliviero Stock
I.P. - Consiglio Nazionale delle Ricerche
Via dei Monti Tiburtini 509
00157 Roma, Italy

ABSTRACT
An account is given of flexible idiom processing within a lexicon based parser. The view is a compositional one. The parser's behaviour is basically the "literal" one, unless a certain threshold is crossed by the weight of a particular idiom. A new process will then be added. The parser, besides yielding all idiomatic and literal interpretations, embodies some claims about the simulation of human processing.

1. Motivation and comparison with other approaches
Idioms are a pervasive phenomenon in natural languages. For instance, the first page of this paper (even if written by a non-native speaker) includes no less than half a dozen of them. Linguists have proposed different accounts of idioms, which derive from two basic points of view: one point of view considers idioms as the basic units of language, with holistic characteristics, perhaps including words as a particular case; the other point of view emphasizes instead the fact that idioms are made up of normal parts of speech that play a precise role in the complete idiom. An explicit statement within this approach is the Principle of Decompositionality (Wasow, Sag and Nunberg 1982): "When an expression admits analysis as morphologically or syntactically complex, assume as an operating hypothesis that the sense of the expression arises from the composition of the senses of its constituent parts". The syntactic consequence is that idioms are not a different thing from "normal" forms. Our view is of the latter kind. We are aware of the fact that the flexibility of an idiom depends on how recognizable its metaphorical origin is. Within flexible word order languages, the flexibility of idioms seems to be even more closely linked to the strengths of particular syntactic constructions.

Let us now briefly discuss some computational approaches to idiom understanding. Applied computational systems must necessarily have a capacity for analyzing idioms. In some systems there is a preprocessor delegated to the recognition of idiomatic forms. This preprocessor replaces the group of words that make up one idiom with the word or words that convey the meaning involved. In ATN systems instead, especially if oriented towards a particular domain, sometimes there are sequences of particular arcs inserted in the network which, if transited, lead to the recognition of a particular idiom (e.g. PLANES, Waltz 1978). LIFER (Hendrix 1977), one of the most successful applied systems, was based on a semantic grammar, and within this mechanism idiom recognition was easy to implement, without considering flexibility. Of course, in all these systems there is no intention to give an account of human processing. PHRAN (Wilensky and Arens 1980) is a system based entirely on pattern recognition. Idiom recognition, following Fillmore's view (Fillmore 1979), is considered the basic resource, all the way down to replacing the concept of grammar based parsing. PHRAN is based on a data base of patterns (including single words, at the same level), and proceeds deterministically, applying the two principles "when in doubt choose the more specific pattern" and "choose the longest pattern". The limits of this approach lie in the capacity for generating various alternative interpretations in case of ambiguity and in the risk of an excessive spread of nonterminal symbols if the data base of idioms is large.
A recent work on idioms with a similar perspective is Dyer and Zernik (1986). The approach we have followed is different. The goals we had with our work must be stated explicitly: 1) to yield a cognitive model of idiom processing; 2) to integrate idioms in our lexical data, just as further information concerning words (as in a traditional dictionary); 3) to insert all this in the framework of WEDNESDAY 2 (Stock 1986), a nondeterministic lexicon based parser. To anticipate the cognitive solution we are discussing here: idiom understanding is based on normal syntactic analysis, with word driven recognition in the background. When a certain threshold is crossed by the weight of a particular idiom, the latter starts a process of its own, that may eventually lead to a complete interpretation. Some of the questions we have dealt with are: a) how are idioms to be specified? b) when are they recognized? c) what happens when they are recognized? d) what happens afterwards?

2. A summary of WEDNESDAY 2
WEDNESDAY 2 (Stock 1986) is a parser based on linguistic knowledge distributed fundamentally through the lexicon. The general viewpoint of the linguistic representation is not far from LFG (Kaplan & Bresnan 1982), although independently conceived. A word interpretation includes:
- a semantic representation of the word, in the form of a semantic net shred;
- static syntactic information, including the category, features, and an indication of the linguistic functions that are bound to particular nodes in the net. One particular specification is the Main node, the head of the syntactic constituent the word occurs in;
- dynamic syntactic information, including impulses to connect pieces of semantic information, guided by syntactic constraints. Impulses look for "fillers" in a given search space. They have alternatives (for instance, the word tell has an impulse to merge its object node with the Main node of either an NP or a subordinate clause). An alternative includes: a contextual condition of applicability, a category, features, marking, and side effects (through which, for example, coreference between the subject of a subordinate clause and a function of the main clause can be indicated). Impulses may also be directed to a different search space than the normal one, with a mechanism that can deal with long distance dependencies;
- measures of likelihood. These are measures that are used in order to derive an overall measure of likelihood of a partial analysis. Measures are included for the likelihood of that particular reading of the word and for aspects attached to an impulse: a) for one particular alternative, b) for the relative position of the filler, c) for the overall necessity of finding a filler;
- a characterization of idioms involving that word (see next paragraph).

The only other data that the parser uses are in the form of simple (non-augmented) transition networks that only provide restrictions on the search spaces where impulses can look for fillers. In more traditional words, these networks deal with the distribution of constituents. A distinguished symbol, SEXP, indicates that only the occurrence of something expected by preceding words (i.e. something for which an impulse was set up) will allow the transition. It is stressed that inside a constituent the position of elements can be free. In WEDNESDAY 2 one can specify, in a natural and nonredundant way, all the graduality from obligatory positions, to obligatory precedences, to simple likelihoods of relative positions.
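The dynamic part of a word entry can be pictured with a small data-structure sketch. This is our illustration, not WEDNESDAY 2's actual code: the field names are assumptions based on the description above.

# An illustrative rendering of an impulse with its alternatives.

from dataclasses import dataclass, field

@dataclass
class Alternative:
    condition: str          # contextual condition of applicability
    category: str           # e.g. "NP" or "S"
    likelihood: float       # measure for this particular alternative
    features: dict = field(default_factory=dict)
    marking: str = ""       # e.g. grammatical case
    side_effects: list = field(default_factory=list)

@dataclass
class Impulse:
    function: str           # linguistic function to fill, e.g. "obj"
    necessity: float        # likelihood of having to find a filler
    alternatives: list = field(default_factory=list)

# 'tell': its object node may merge with the Main node of either an NP
# or a subordinate clause.
tell_obj = Impulse(
    function="obj",
    necessity=0.9,
    alternatives=[
        Alternative(condition="t", category="NP", likelihood=0.7, marking="acc"),
        Alternative(condition="t", category="S", likelihood=0.3),
    ],
)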
The parser is based on an extension of the idea of chart parsing (Kay 1980, Kaplan 1973) (see Stock 1986). What is relevant here is the fact that "edges" correspond to search spaces. They are complex data structures provided with a rich amount of information, including a semantic interpretation of the fragment, syntactic data, pending impulses, an overall measure of likelihood, etc. Data on an edge are "unified" dynamically. Parsing goes basically bottom-up with top-down confirmation, improving the so-called Left Corner technique. When a lexical edge with category C is added to the chart, its First Left Cross References F(C) are fetched. First Left Cross References are defined recursively: for every lexical category C, the set of initial states that allow for transitions on C, or the set of initial states (without repetitions) that allow for transitions on symbols in F(C). So, for instance, F(Det) = {NP, S}, at least. For each element in F(C) an edge of a special kind is added to the chart. These special edges are called sleeping edges. A sleeping edge S at a vertex V is awakened, i.e. causes the introduction of a normal active edge, iff there is an active edge arriving at V that may be extended with an edge with the category of S. If they are not awakened, sleeping edges play no role at all in the process.
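The recursive definition of F(C) and the awakening test translate into a few lines of code. The following is a minimal sketch under our own representation assumptions: the transition networks are given as a map from each initial state to the category symbols it allows first.

# A sketch of First Left Cross References and sleeping-edge awakening.

def first_left_cross_refs(cat, first_transitions, seen=None):
    """F(C): the initial states with a first transition on C, plus,
    recursively and without repetitions, the initial states with a
    first transition on a symbol already in F(C)."""
    if seen is None:
        seen = set()
    for state, symbols in first_transitions.items():
        if cat in symbols and state not in seen:
            seen.add(state)
            first_left_cross_refs(state, first_transitions, seen)
    return seen

def awaken(sleeping_cat, expectations_at_vertex):
    """A sleeping edge is awakened iff some active edge arriving at its
    vertex may be extended with an edge of the sleeping edge's category."""
    return any(sleeping_cat in expected for expected in expectations_at_vertex)

# Determiners begin NPs, and NPs begin Ss, so F(Det) includes both:
first_transitions = {"NP": {"Det", "N"}, "S": {"NP", "V"}}
print(first_left_cross_refs("Det", first_transitions))   # -> {'NP', 'S'}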
An agenda is provided which includes tasks of several different types, including lexical tasks, extension tasks, insertion tasks and virtual tasks. A lexical task specifies a possible reading of a word to be introduced in the chart as an inactive edge. An extension task specifies an active edge and an inactive edge that can extend it (together with some more information). An insertion task specifies a nondeterministic unification operation. A virtual task consists in extending an active edge with an edge displaced to another point of the sentence, according to the mechanism that treats long distance dependencies. At each stage the next task chosen for execution is the value of a scheduling-selecting function. The parser works asymmetrically with respect to the "arrival" of the Main node: before the Main node arrives, an extension of an edge causes almost nothing. On the arrival of the Main, all the candidate fillers must find a compatible impulse and all impulses concerning the Main node must find satisfaction. If all this does not happen, then the new edge that was to be added to the chart is not added: the situation is recognized as a failure. After the arrival of the Main, each new head must find an impulse to merge with, and each incoming impulse must find satisfaction. Again, if all this does not happen, the new edge will not be added to the chart. Dynamically, apart from the general behaviour of the parser, there are some particular restrictions on its nondeterministic behaviour, which put into effect syntax-based dynamic disambiguation:
1) the SEXP arc allows for a transition only if the configuration in the active edge includes an impulse to link with the Main of the proposed inactive edge;
2) the sleeping edge mechanism prevents edges not compatible with the left context from being established;
3) a search space can be closed only if no impulse that was specified as having to be satisfied remains. In other words, in a state with an outgoing EXIT arc, an active edge can cause the establishing of an inactive edge only if there are no obligatory impulses left;
4) a proposed new edge A' with a verb tense not matching the expected values causes a failure, i.e. A' will not be introduced in the chart;
5) failure is caused by inadequate mergings, with relation to the presence, absence or ongoing introduction of the Main node.

Compared to the criteria established for LFG for functional compatibility of an f-structure (Kaplan & Bresnan 1982), the following can be said of the dynamics outlined here. Incompleteness recognition performs as specified in 3), and furthermore there is an earlier check when the Main arrives, in case there were obligatory impulses to be satisfied at that point (e.g. an argument that must occur before the Main). Incoherence is completely avoided after the Main has arrived, by the SEXP arc mechanism; before this point, it is recognized as specified in 5) above, and causes an immediate failure. Inconsistency is detected as indicated in 4) and 5). As far as 5) is concerned, though, the attitude is to "activate" impulses when the right premises are present and to "look for the right thing", not to "check if what was done is consistent".

Note that a morphological analyzer, WED-MORPH, linked to WEDNESDAY 2, plays a substantial role, especially if the language is Italian. In Italian you may find words like rifacendogliene, which stands for while making some (of them) for him again. The morphological analyzer not only recognizes complex forms, but must be able to put together complex constraints originated in part by the stem and in part by the affixes. The same holds for the semantic representation, and this will have consequences in our dealing with idioms. Fig. 1 shows a diagram of WEDNESDAY 2.

[Fig. 1: block diagram of WEDNESDAY 2, showing the sentence processor and unification components; the graphic is not legibly recoverable from the source.]

3. Specification of idioms in the lexicon
Idioms are introduced in the lexicon as further specifications of words, just as in a normal dictionary. They may be of two types: a) canned phrases, that just behave as several-word entries in the lexicon (there is nothing particularly interesting in that, so we shall not go into detail here); b) flexible idioms.
MORESPECIFIC, the further specification of an impulse to set a filler for a function, includes: a reference to one of the possible alternative types of fillers specified in the normal impulse, a specification that describes the fragment that is to play this particular role in the idiom, and the weight that this component has in the overall recognition of the idiom. IDMODIFIER is a specification of a modifier, including the description of the fragment and the weight of this component. CHANGEIMPULSE and REMOVEIMPULSE permit an alteration of the normal syntactic behaviour. The former specifies a new alternative for a filler for an existing function, including the description of the component and its weight (for instance, the new alternative may be a partial NP instead of a complete NP (as in take care), or an NP marked differently from usual). The latter specifies that a certain impulse, specified for the word, is to be considered to have been removed for this idiom description. There are a number of possible fragment specifications, including string patterns, semantic patterns, morphological variations, coreferences, etc. Substitutions include the semantics of the idiom, which is supposed to take the place of the literal semantics, plus the specification of the new Main and of the bindings for the functions. New bindings may be included to specify new semantic linkings not present in the literal meaning (e.g. take care of <someone>: if the meaning is to attend to <someone>, then <someone> must become an argument of attend).

<idioms>       ::= (IDIOMS <idiomentry>+)
<idiomentry>   ::= (<lexicalform> <idiom-stat>+ SUBSTITUTIONS <idiomsubst>+)
<lexicalform>  ::= T / (NOT-PASSIVE)
<idiom-stat>   ::= (MORESPECIFIC <lingfunc> <alternnum> <fragmentspec> <weight>) /
                   (CHANGEIMPULSE <lingfunc> <alternative>+ <fragmentspec> <weight>) /
                   (IDMODIFIER <fragmentspec> <weight>) /
                   (REMOVEIMPULSE <lingfunc>)
<alternative>  ::= (<test> <fillertype> <beforelh> <features> <mark> <sideffect> <fragmentspec>)
<fragmentspec> ::= (WORD <word>) / (FIXWORDS <wordseq>) / (FIRSTWORDS <wordseq>) /
                   (MORPHWORD <wordroot>) / (SEM (<concept>+) <prep>) / (EQSUBJ)
<idiomsubst>   ::= (SEM-UNITS <sem-unit>+) / (MAIN <node>) /
                   (BINDINGS (<lingfunc> <node>)+) /
                   (NEWBINDINGS (<node> <lingfunc path>)+)

Fig. 2

4. Idiom processing

Idiom processing works in WEDNESDAY 2 integrated in the nondeterministic, multiprocessing-based behaviour of the parser. As the normal (literal) analysis proceeds and partial representations are built, impulses are monitored in the background, checking for possible idiomatic fragments. Monitoring is carried on only for fragments of idioms not in contrast with the present configuration. A dynamic activation table is introduced with the occurrence of a word that has some idiom specification associated. Occurrence of an expected fragment of an idiom in the table raises the level of activation of that idiom, in proportion to the relative weight of the fragment. If the configuration of the sentence contrasts with one fragment, then the relative idiom is discarded from the table. So all the normal processing goes on, including the possible nondeterministic choices, the establishing of new processes, etc. The activation tables are included in the edges of the chart. When the activation level of a particular idiom crosses a fixed threshold, a new process is introduced, dedicated to that particular idiom.
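The dynamics of the activation table can be sketched compactly. The Python fragment below is an illustration under assumptions, not the Interlisp-D implementation: idioms are reduced to maps from expected fragments to weights, and the threshold is the fixed value of 10 used in the example of Section 5.

    # Sketch of the idiom activation table.
    THRESHOLD = 10

    class ActivationTable:
        def __init__(self, idioms):
            # idioms: {idiom_name: {expected_fragment: weight}}
            self.expected = {n: dict(f) for n, f in idioms.items()}
            self.activation = {n: 0 for n in idioms}

        def fragment_seen(self, fragment):
            """Raise the activation of every idiom expecting this fragment,
            in proportion to its weight; return the idioms whose activation
            crosses the threshold, each of which would start a dedicated
            idiomatic process."""
            fired = []
            for name, frags in self.expected.items():
                if fragment in frags:
                    self.activation[name] += frags[fragment]
                    if self.activation[name] >= THRESHOLD:
                        fired.append(name)
            return fired

        def contrast(self, name):
            """The sentence configuration contrasts with a fragment:
            the idiom is discarded from the table."""
            self.expected.pop(name, None)
            self.activation.pop(name, None)

    # With the two fragments and weights given below for prendere il toro
    # per le corna (8 and 10):
    t = ActivationTable({"toro-per-le-corna": {"il toro": 8, "per le corna": 10}})
    print(t.fragment_seen("per le corna"))    # ['toro-per-le-corna']

Since activation tables live on the edges of the chart, a real implementation would also copy the table into every new edge, which the sketch leaves out.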
In that dedicated process, only that idiomatic interpretation is considered. Thus, in the first place, an edge is introduced in which the substitutions are carried out; the process will proceed with the idiomatic representation. Note that the process begins at that precise point, with all the previous literal analysis acquired by the idiomatic analysis. The original process goes on as well (unless the fragment that caused the new process is nonsyntactic and peculiar only to that idiom); only, the idiom is removed from the active idiom table. At this point there are two working processes, and it is a matter of the (external) scheduling function to decide priorities. What is relevant is: a) the idiomatic process may still result in a failure: further analysis may not confirm what has been hypothesized as an idiom; b) a different idiomatic process may be split off from the literal process at a later stage, when its own activation level crosses the threshold. Altogether, this yields all the analyses, literal and idiomatic, with likelihoods for the different interpretations. In addition, it seems a reasonable model of how humans process idioms. Some psycholinguistic experiments have supported this view (Cacciari & Stock, in preparation), which is also compatible with the model presented by Swinney and Cutler (1978). Here we have disregarded the situation in which a possible idiomatic form occurs and its role in disambiguating. The whole parsing mechanism in WEDNESDAY 2 is based on dynamic unification, i.e. at every step in the parsing process a partial interpretation is provided; dynamic choices are performed by scheduling the agenda on the basis of the relation between partial interpretations and the context.

5. An example

As an example let us consider the Italian idiom prendere il toro per le corna (literally: to take the bull by the horns; idiomatically: to confront a difficult situation). The verb prendere (to take) in the lexicon includes some descriptions of idioms. Fig. 3 shows the representation of prendere in the lexicon. The stem representation will be unified with other information and constraints coming from the affixes involved in a particular form of the verb. The first portion of the representation is devoted to the literal interpretation of the word, and includes the semantic representation, the likelihood of that reading, and functional information, including the specification of impulses for unification. The numbers are likelihoods of the presence of an argument or of a relative position of an argument.

(sem-units (n1 (p-take n2 n3)))
(likeliradix 0.8)
(main n1)
(lingfunctions (subj n2) (obj n3))
(cat v)
(uni (subj) (must 0.7) ((t np 0.9 nil nom)))
(uni (obj) (must) ((t np 0.3 nil acc)))
(idioms
 ((t (morespecific (obj) 1 (fixwords il toro) 8)
     (idmodifier (fixwords per le corna) 10)
     substitutions
     (sem-units (m1 (p-confront m2 m3))
                (m4 (p-situation m3))
                (m5 (p-difficult m3)))
     (main m1)
     (bindings (subj m2))]

Fig. 3

The second portion, after "idioms", includes the idioms involving prendere. In Fig. 3 only one such idiom is specified. It is indicated that the idiom can also occur in a passive form, and the specification of the expected fragments is given. The numbers here are the weights of the fragments (the threshold is fixed to 10). The substitutions include the new semantic representation, with the specification of the main node and of the binding of the subject.
Note that the surface functional representation will not be destroyed after the substitutions; only the semantic (logical) representation will be recomputed, imposing its own bindings. As mentioned, Italian allows great flexibility. Let the input sentence be l'informatico prese per le corna la capra (literally: the computer scientist took by the horns the goat). When prese (took) is analyzed, its idiom activation table is inserted. When the modifier per le corna (by the horns) shows up, the activation of the idiom referred to above crosses the threshold (the sum of the two weights goes up to 12). A new process starts at this point, with the new interpretation unified with the previous interpretation of the Subject. Also, semantic specifications coming from the suffixes are reused in the new partial interpretation. The process just departs from the literal process; no backtracking is performed. At this point we have two processes going on: an idiomatic process, where the interpretation is already the computer scientist is confronting a difficult situation, and a literal process, where, in the background, still other active idioms monitor the events. In Fig. 4 the two semantic representations, in the form of semantic networks, are shown. When the last NP, la capra (the goat), is recognized, the idiomatic process fails (it needed the bull as Object). The literal process yields its analysis, but also another idiom crosses the threshold, starts its process with the substitutions, and immediately concludes positively. This latter, unlikely, idiomatic interpretation means the computer scientist confused the goat and the horns.

6. Implementation

WEDNESDAY 2 is implemented in Interlisp-D and runs on a Xerox 1186. The idiom recognition ability was easily integrated into the system. The performance is very satisfying, in particular with regard to the flexibility present in Italian. Around the parser a rich environment has been built. Besides allowing easy editing and graphic inspecting of resulting structures, it allows interaction with the agenda and exploration of heuristics in order to drive the multiprocessing mechanism of WEDNESDAY 2.

[Fig. 4: the two semantic networks built for the example, (a) the idiomatic interpretation and (b) the literal one; the graphic itself was lost in extraction.]

This environment constitutes a basic resource for exploring cognitive aspects, complementary to laboratory experiments with humans. At present we are also working on an implementation of a generator that includes the ability to produce idioms, based on the same data structure and principles as the parser.

Acknowledgements

Thanks to Cristina Cacciari for many discussions and to Federico Cecconi for his continuous help.

References

Dyer, M. & Zernik, U. Encoding and Acquiring Meaning for Figurative Phrases. In Proceedings of the 24th Meeting of the Association for Computational Linguistics. New York (1986)

Fillmore, C. Innocence: a Second Idealization for Linguistics. In Proceedings of the Fifth Annual Meeting of the Berkeley Linguistics Society. University of California at Berkeley, 63-76 (1979)

Hendrix, G.G. LIFER: a Natural Language Interface Facility. SIGART Newsletter Vol. 61 (1977)
Kaplan, R. A general syntactic processor. In Rustin, R. (Ed.), Natural Language Processing. Englewood Cliffs, N.J.: Prentice-Hall (1973)

Kaplan, R. & Bresnan, J. Lexical-Functional Grammar: a formal system for grammatical representation. In Bresnan, J. (Ed.), The Mental Representation of Grammatical Relations. The MIT Press, Cambridge, 173-281 (1982)

Kay, M. Algorithm Schemata and Data Structures in Syntactic Processing. Report CSL-80-12, Xerox Palo Alto Research Center, Palo Alto (1980)

Stock, O. Dynamic Unification in Lexically Based Parsing. In Proceedings of the Seventh European Conference on Artificial Intelligence. Brighton, 212-221 (1986)

Swinney, D.A. & Cutler, A. The Access and Processing of Idiomatic Expressions. Journal of Verbal Learning and Verbal Behaviour, 18, 523-534 (1978)

Waltz, D. An English Language Question Answering System for a Large Relational Database. Communications of the Association for Computing Machinery, Vol. 21, N. 7 (1978)

Wasow, T., Sag, I. & Nunberg, G. Idioms: an interim report. Preprints of the International Congress of Linguistics, 87-96, Tokyo (1982)

Wilensky, R. & Arens, Y. PHRAN: A Knowledge Based Approach to Natural Language Analysis. University of California at Berkeley, ERL Memorandum No. UCB/ERL M80/34 (1980)
1987
8
Phrasal Analysis of Long Noun Sequences

Yigal Arens, John J. Granacki, and Alice C. Parker
University of Southern California
Los Angeles, CA 90089-0782

ABSTRACT

Noun phrases consisting of a sequence of nouns (sometimes referred to as nominal compounds) pose considerable difficulty for language analyzers but are common in many technical domains. The problems are compounded when some of the nouns in the sequence are ambiguously also verbs. The phrasal approach to language analysis, as implemented in PHRAN (PHRasal ANalyzer), has been extended to handle the recognition and partial analysis of such constructions. The phrasal analysis of a noun sequence is performed to an extent sufficient for continued analysis of the sentence in which it appears. PHRAN is currently being used as part of the SPAN (SPecification ANalysis) natural language interface to the USC Advanced Design AutoMation system (ADAM) (Granacki et al., 1985). PHRAN-SPAN is an interface for entering and interpreting digital system specifications, in which long noun sequences occur often. The extensions to PHRAN's knowledge base to recognize these constructs are described, along with the algorithm used to detect and resolve ambiguities which arise in the noun sequences.

1. Introduction

In everyday language we routinely encounter noun phrases consisting of an article and a head noun, possibly modified by one or more adjectives. Noun-noun pairs, e.g., park bench, atom bomb, and computer programmer, are also common. It is rare, however, to encounter noun phrases consisting of three or more nouns in sequence. Consequently, research in natural language analysis has not concentrated on parsing such constructions. The situation in many technical fields is quite different. For example, when describing the specifications of electronic systems, designers commonly use expressions such as:

bus request cycle
transfer block size
segment trap request
interrupt vector transfer phase
arithmetic register transfer instruction.

During design specification such phrases are often constructed by the specifier in order to reference a particular entity: a piece of hardware, an activity, or a range of time. In most cases, the nouns preceding the last one are used as modifiers, and idiomatic expressions are very rare. In almost all cases the meaning of noun sequences can therefore be inferred largely based on the last noun in the sequence*. (But see Finin (1980) for in-depth treatment of the meaning of such constructions.) The process of recognizing the presence of these expressions is, however, complicated by the fact that many of the words used are syntactically ambiguous. Almost every single word used in the examples above belongs to both the syntactic categories of noun and verb. As a result, bus request cycle may conceivably be understood either as a command (to bus the request cycle) or as a noun phrase.

* When a sequence has length three or more the order of modification may vary. Consider: [engine damage] report, January [aircraft repairs], [boron epoxy] [[rocket motor] chambers], 1970 [balloon flight [[solar-cell standardization] program]]. But the last noun is still the modified one. These examples are from (Rhyne, 1976) and (Marcus, 1979).

Considerable knowledge of the semantics of the domain is necessary to decide the correct interpretation of a nominal compound, and the natural language analyzer must ultimately have access to it.
But before complete semantic interpretation of such a noun phrase can even be attempted, the analyzer must have a method of recognizing its presence in a sentence and determining its boundaries.

1.1. The Rest of this Paper

The rest of this paper is structured as follows: In the next section, Section 2, we describe the phrasal analysis approach used by our system to process input sentences. In Section 3 we discuss the problems involved in the recognition of long noun sequences, and in Section 4 we present our proposed solution and describe its implementation. Sections 5 and 6 are devoted to related work and to our conclusions, respectively.

2. The PHRAN-SPAN System

PHRAN, a PHRasal ANalysis program (Arens, 1986; Wilensky and Arens, 1980), is an implementation of a knowledge-based approach to natural language understanding. The knowledge PHRAN has of the language is stored in the form of pattern-concept pairs (PCPs). The linguistic component of a pattern-concept pair is called a phrasal pattern and describes an utterance at one of various different levels of abstraction. It may be a single word, or a literal string like Digital Equipment Corporation, or it may be a general phrase such as

(1) <component> <send> <data> to <component>

which allows any object belonging to the semantic category component to appear as the first and last constituents, anything in the semantic category data as the third constituent, any form of the verb send as the second, while the lexical item to must appear as the fourth constituent. Associated with each phrasal pattern is a conceptual template, which describes the meaning of the phrasal pattern, usually with references to the constituents of the associated phrase. Each PCP encodes a single piece of knowledge about the language the database is describing. For the purpose of describing design specifications and requirements a declarative representation language was devised, called SRL (Specification and Requirements Language). In SRL the conceptual template associated with phrasal pattern (1) above is a form of unidirectional value transfer. In this specific case it denotes the transfer of the data described by the third constituent of the pattern by the controlling agent described by the first constituent to the component described by the fifth. For further details of the representation language used see (Granacki et al., 1987).

PHRAN analyzes input by searching for phrasal patterns that match fragments of it and replacing such fragments with the conceptual template associated with the pattern. The result of matching a pattern may in turn be present as a constituent in a larger pattern. Finally, the conceptual template associated with a pattern that accounts for all the input is used to generate a structure denoting the meaning of the complete utterance. A slightly more involved version of the PCP discussed above is used by PHRAN-SPAN to analyze the sentence: The cpu transfers the code word from the controller to the peripheral device.

3. The Problem with Long Noun Sequences

Long noun sequences pose considerable difficulty to a natural language analyzer. The problems will be described and treated in this section in terms of phrasal analysis, but they are not artifacts of this approach. A comparison with other approaches to such constructs, mentioned later in this paper, also makes this clear. The main difficulties with multiple noun sequences are:

• Determination of their length.
One must make sure that the first few nouns are not taken to constitute the first noun phrase, ignoring the words that follow. For example, upon reading bus request cycle we do not want the analyzer to conclude that the first noun phrase is simply bus, or bus request.

• Interpretation of ambiguous noun/verbs.

A large portion of the vocabulary used in digital system specification consists of words which are both nouns and verbs. Consequently the phrase interrupt vector transfer phase, for example, might be interpreted as a command to interrupt the vector transfer phase, or (unless we are careful about number agreement) as the claim that phase is transferred by interrupt vectors. In spoken language stress is sometimes used to "adjective-ize" nouns used as modifiers. For example, the spoken form would be "arithmetic register transfer" rather than "arithmetic register transfer". Obviously, such a device is not available in our case, where specifications are typed.

• Determination of enough about their meaning to permit further analysis of the input.

Full understanding of such expressions requires more domain knowledge than one would wish to employ at this point in the analysis process (cf. Finin (1980)). However, at least a minimal understanding of the semantics of the noun phrase is necessary for testing selectional restrictions of higher level phrasal patterns. This is required, in turn, in order to provide a correct representation of the meaning of the complete input.

The phrasal approach utilizes the phrasal pattern as the primary means of recognizing expressions, and in particular noun sequences. In effect, a phrasal pattern is a sequence of restrictions that constituents must satisfy in order to match the pattern. The most common restrictions on a constituent in a PHRAN phrasal pattern, and the ones relevant in our case, are of the following three types:

1. The constituent must be a particular word;
2. It must belong to a particular semantic category; or,
3. It must belong to a particular syntactic category.

In addition, simple lookahead restrictions may be attached to any constituent of the pattern. In the original version of PHRAN such restrictions were limited to demanding that the following word be of a certain syntactic category.

Simple phrasal patterns are clearly not capable of solving the problem of recognizing multiple noun sequences. It is not possible to anticipate all such sequences and specify them literally, word for word, since they are often generated on the fly by the system specifier. For a similar reason phrasal patterns describing the sequence of semantic categories that the nouns belong to are, as a rule, inadequate. Finally, from the syntactic point of view all these constructions are just sequences of nouns. A pattern simply specifying such a sequence provides little of the information needed to decide which expression is present and what it might refer to.

4. A Heuristic Solution

PHRAN's inherent priority scheme was used to solve part of the problem. If a word can be used either as a noun or a verb, it is recognized first as a noun, all other things being equal. This simple approach was modified to be subject to the following rules:

1. If the current word is a noun, and the next word may be either a noun or a verb, test it for number agreement (as a verb). If the test is unsuccessful do not end the noun phrase.

2. If the current word is a noun, and the next word may be either a noun or a verb, test if the current word* is a possible active agent with respect to the next (as a verb). If not, do not end the noun phrase.

3. If the current word is a noun, and the next word may be either a noun or a verb, check the word after the next one. If it is (unambiguously) a verb, end the noun phrase with the next word. If it is (unambiguously) a noun, do not end the noun phrase. If the second word away may be either a noun or a verb, treat the utterance as potentially ambiguous, with a noun phrase ending either at the current word or with the next word.

* The current word may be the last in a sequence of nouns; we are again assuming that its meaning can be used to approximate the meaning of the noun sequence.
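The three rules amount to a small decision procedure over a two-word lookahead window. The following Python fragment is a sketch of that procedure only; the lexicon and the agreement and agent tests are stand-ins invented for illustration (in PHRAN-SPAN the latter come from the knowledge base of pattern-concept pairs).

    # Sketch of the boundary heuristic (Rules 1-3).
    def np_boundary(current, nxt, after_next, lex, agrees, can_be_agent):
        """Decide whether the noun phrase may end at `current` when `nxt`
        is a noun/verb homograph; `lex` maps words to syntactic categories."""
        if "verb" not in lex.get(nxt, set()):
            return "continue"                       # next word is a plain noun
        if not agrees(current, nxt):                # Rule 1
            return "continue"
        if not can_be_agent(current, nxt):          # Rule 2
            return "continue"
        cats = lex.get(after_next, set())           # Rule 3: one more word ahead
        if cats == {"verb"}:
            return "end with next word"
        if cats == {"noun"}:
            return "continue"
        return "ambiguous"                          # NP ends here or with next

    # Toy run over "... signal interrupts transfer ..." (cf. the example below):
    lex = {"interrupts": {"noun", "verb"}, "transfer": {"noun", "verb"}}
    print(np_boundary("signal", "interrupts", "transfer", lex,
                      agrees=lambda a, b: True, can_be_agent=lambda a, b: True))
    # -> 'ambiguous'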
Once a complete noun phrase is detected, a new token is created to represent its referent. While all nouns used in its construction are noted, it inherits the semantics of the last noun in the sequence. This information may be used in later stages of the analysis. Other programs which receive the analyzer's output will inspect the representation of the noun phrase again later to determine its meaning more precisely.

The heuristic described above has been found to be sufficient to deal with all inputs our system has received up until now. It detects as ambiguous a sentence such as the following: The cpu signal interrupts transfer activity. When looking at the word cpu, PHRAN-SPAN finds that Rule 1 can be used. Since number agreement is absent between cpu and signal (used as a verb), the noun phrase cannot be considered complete yet. When the word signal is processed, the system notes that interrupts may be either a (plural) noun or a verb. Number agreement is found, and it is also the case that a signal may act as an agent in an action of interruption, so Rules 1 and 2 provide no information. Using Rule 3 we find that the following word, transfer, is an ambiguous noun/verb. Thus the result of the analysis to this point is indicated as ambiguous, possibly

a. [the cpu signal] [interrupts] [transfer activity], or
b. [the cpu signal interrupts] [transfer] [activity].

The type of ambiguity detected by Rule 3 can often be eliminated by instructing the users of the specification system to use modals when possible. In the case of the example above, to force one of the two readings for the sentence, a user might type the cpu signal will interrupt transfer activity, or the cpu signal interrupts will transfer activity, as appropriate.

4.1. Requesting User Assistance

When Rule 3 detects an ambiguity, the system presents both alternatives to the user and asks for an indication of the intended one. PCPs encode in their phrasal pattern descriptions, among other things, selectional restrictions that at times allow the system to rule out some of the ambiguities detected by Rule 3. For example, it is conceivable that interrupts might not be acceptable as agents in a transfer. PHRAN-SPAN would thus be capable of eventually ruling out analysis b. above on its own. However, more often than not it is the case that both interpretations provided by Rule 3 are sensible. We decided that the risk of a wrong specification being produced required that in cases of potential ambiguity the system request immediate aid from the user. Therefore, when sentences like the one in the example above are typed and processed, PHRAN-SPAN will present both possible readings to the user and request that the intended one be pointed out before analysis proceeds.

4.2. Rule Implementation

The rules described above are implemented in several pattern-concept pairs and are incorporated into the standard PHRAN knowledge base of PCPs. For example, one of the PCPs used to detect the situation described in Rule 1, while taking into consideration Rule 3, is (in simplified form):

Pattern: {<article> <sing-noun & next N/V & next non-sing & after-next verb>}
Concept: {part of speech: noun phrase
          semantics: inherit from (second noun)
          modifiers: (first noun)}

4.3. Current Status

The system currently processes specifications associated with all primitive concepts of the specification language, which are sufficient to describe behavior in the domain of digital systems. Pattern-concept pairs have been written for 25 basic verbs common in specifications and for over 100 nouns. This is in addition to several hundred PCPs supplied with the original PHRAN system. The system is coded in Franz LISP and runs on a SUN/2 under UNIX 4.2 BSD. In interpreted mode a typical specification sentence will take 20 cpu seconds to process. No attempt has been made to optimize the code, compile it, or port it to a LISP processor. Any of these should result in an interface which could operate in near real-time.

5. Related Work

The problem of noun sequences of the kind common in technical fields like digital system specification has received only limited treatment in the literature. Winograd (Winograd, 1972) presents a more general discussion of Noun Groups, but the type of utterances his system expects does not include extended sequences of nouns as are common in our domain. Winograd therefore does not address the specific ambiguity problems raised here.

Gershman's Noun Group Parser (NGP) (Gershman, 1979) dealt, among other things, with multiple noun sequences. While our algorithm is consistent with his, our approach differs from NGP in major respects. NGP contains what amount to several different programs for various types of noun groups, while we treat the information needed to analyze these structures as data. PHRAN embodies a general approach to language analysis that does not require components specialized to different types of utterances. A clear separation of processing strategies from knowledge about the language has numerous advantages that have been listed elsewhere (Arens, 1986). In addition, our treatment of noun groups as a whole is integrated into PHRAN and not a separate module, as NGP is. In evaluating the two systems, however, one must keep in mind that the choice of domain greatly influences the areas of emphasis and interest in language analysis. NGP is capable of handling several forms of noun groups that we have not attempted to deal with.

Marcus (1979) describes a parsing algorithm* for long noun sequences of the type discussed in this paper. It is interesting to note that the limited lookahead added to the original PHRAN for the purpose of noun sequence recognition is consistent with Marcus' three-place constituent buffer. The major difference between Marcus' algorithm and ours is that the former requires a semantic component that can judge the relative "goodness" of two possible noun-noun modifier pairs.

* Discovered by Finin (1980) to be erroneous in some cases.
Therefore, when sentences like the one in the example above are typed and processed, PHRAN-SPAN will present both possi- ble readings to the user and request that the intended one be pointed out before analysis proceeds. 4.2. Rule Implementation The rules described above are implemented in several pattern-concept pairs and are incorporated into the standard PHRAN knowledge base of PCPs. For example, one of the PCPs used to detect the situation described in Rule 1. while tak- ing into consideration Rule 3. is (in simplified form): Pattern: {<article> <sing-noun & next NfV & next non-sing & after-next verb >} Concept {part of speech: noun phrase semantics: inherit from (second noun) modifiers: (first noun)} 4.3. Current Status The system currently processes specifications associated with all primitive concepts of the specification language, which are sufficient to describe behavior in the domain of digital systems. Pattern-concept pairs have been written for 25 basic verbs common in specifications and for over 100 nouns. This is in addition to several hundred PCPs supplied with the original PHRAN system. The system is coded in Franz LISP and runs on SUN/2 under UNIX 4.2 BSD. In interpreted mode a typical specification sentence will take 20 cpu seconds to process. No attempt has been made to optimize the code, compile it, or port it to a LISP processor. Any of these should result in an 62 interface which could operate in near real-time. 5. Related Work The problem of noun sequences of the kind common in technical fields like digital system specification has received only limited treatment in the literature. Winograd (Winograd, 1972) presents a more general discussion of Noun Groups, but the type of utterances his system expects does not include extended sequences of nouns as are common in our domain. Winograd therefore does not address the specific ambiguity problems raised here. Gershman's Noun Group Parser (NGP) (Gershman, 1979) dealt, among other things, with multiple noun sequences. While our algorithm is consistent with his, our approach differs from NGP in major respects. NGP contains what amount to several different programs for various types of noun groups, while we treat the information needed to analyze these structures as data. PHRAN embodies a general approach to language analysis that does not require components special- ized to different types of utterances. A clear separation of processing strategies from knowledge about the language has numerous advantages that have been listed elsewhere (Arens, 1986). In addi- tion, our treatment of noun groups as a whole is integrated into PHRAN and not a separate module, as NGP is. In evaluating the two systems, however, one must keep in mind that the choice of domain greatly influences the areas of emphasis and interest in language analysis. NGP is capable of handling several forms of noun groups that we have not attempted to deal with. Marcus (1979) describes a parsing algorithm* for long noun sequences of the type discussed in this paper. It is interesting to note that the lim- ited lookahead added to the original PHRAN for the purpose of noun sequence recognition is con- sistent with Marcus' three-place constituent buffer. The major difference between Marcus' algorithm and ours is that the former requires a semantic component that can judge the relative "goodness" of two possible noun-noun modifier pairs. For * Discovered by Finin (Ig80) to be erroneous in some ca.ses. 
example, given the expression transfer block Mzc, this component would be responsible for determin- ing whether block size is semantically superior to transfer block. Such a powerful component is not necessary for achieving our present objective - recognizing the presence and boundaries of a noun sequence. Our heuristic does not require it. A complementary but largely orthogonal effort is the complete semantic interpretation of long noun sequences. There have been several attempts to deal with the problem of producing a meaning representation for a given string of nouns. See (Finin, 19~0) and (Reimold, 1976) for extensive work in this area, and also (Brachman, 1978) and (Borgida, 1975). Such work by and large assumes that the noun sequence has already been recognized as such. I.e., it requires the existence of a com- ponent much like the one described in this paper from which to receive a noun sequence for process- ing. 6. Conclusions We have presented a heuristic approach to the understanding of long noun sequences. The heuristics have been incorporated into the PHRasal ANalyzer by adding to its declarative knowledge base of pattern-concept pairs. These additions pro- vide the PHRAN-SPAN system with the capability to translate digital system specifications input in English into correct representations for use by other programs. 7. Acknowledgements We wish to thank the anonymous reviewers of this paper for several helpful comments. This research was supported in part by the National Science Foundation under computer engineering grant #DMC-8310744. John Granacki was partially supported by the Hughes Aircraft Co. 8. Bibliography Arens, Y. CLUSTER: An approach to Conteztual Language Understanding. Ph.D. thesis, University of California at Berkeley, 1986. 63 Borgida, A. T. Topics in the Understanding of English Sentences by Computer. Ph.D. thesis, Department of Computer Science, University of Toronto, 1975. Brachman, R. J. Theoretical Studies in Natural Language Understanding. Report No. 3833, Bolt Beranek and Newman, May 1978. Finis, T.W. The Semantic Interpretation of Com- pound Nominals. Ph.D. thesis, University of Illi- nois at Urbana-Champalgn, 1980. Gershman, A. V. Knowledge-Based ParMng. Ph.D. thesis, Yale University, April 1979. Granacki, J., D. Knapp, and A. Parker. The ADAM Design Automation System: Overview, Planner and Natural Language Interface. In Proceedings of the ggnd ACM/IEEE Design Auto- mation Conference, pp. 727-730. ACM/IEEE, June, 1985. Cranacki, J., A. Parker, and Y. Arens. Under- standing System Specifications Written in Natural Language. In Proceedings of IJCAI-87, the Tenth International Joint Conference on Artificial Intelli- gence. Milan, Italy. July 1987. Marcus, M. P. A Theory of Syntactic Recognition for Natural Language. The MIT Press, Cambridge, Mass. and London, England, 1979. Reimold, P. M. An Integrated System of Percep- tual Strategies: Syntactic and Semantic Interpreta- tion of English Sentences. Ph.D. thesis, Columbia University, 1976. Rhyne, J. R. A Lexical Process Model of Nominal Compounding in English. American Journal of Computational Linguistics, microfiche 33. 1976. Wilensky, R., and Y. Arens. PHRAN: A Knowledge-Based Natural Language Understander. In Proceedings of the 18th Annual Meeting of the Association for Computational Linguistics. Phi- ladelphia, PA. June 1980. Winograd, T. Understanding Natural Language. Academic Press, 1972. 64
1987
9
ADAPTING AN ENGLISH MORPHOLOGICAL ANALYZER FOR FRENCH

Roy J. Byrd and Evelyne Tzoukermann
IBM Research
IBM Thomas J. Watson Research Center
Yorktown Heights, New York 10598

ABSTRACT

A word-based morphological analyzer and a dictionary for recognizing inflected forms of French words have been built by adapting the UDICT system. We describe the adaptations, emphasizing mechanisms developed to handle French verbs. This work lays the groundwork for doing French derivational morphology and morphology for other languages.

1. Introduction.

UDICT is a dictionary system intended to support the lexical needs of computer programs that do natural language processing (NLP). Its first version was built for English and has been used in several systems needing a variety of information about English words (Heidorn, et al. (1982), Sowa (1984), McCord (1986), and Neff and Byrd (1987)). As described in Byrd (1986), UDICT provides a framework for supplying syntactic, semantic, phonological, and morphological information about the words it contains. Part of UDICT's apparatus is a morphological analysis subsystem capable of recognizing morphological variants of the words whose lemma forms are stored in UDICT's dictionary. The English version of this analyzer has been described in Byrd (1983) and Byrd, et al. (1986) and allows UDICT to recognize inflectionally and derivationally affixed words, compounds, and collocations. The present paper describes an effort to build a French version of UDICT. It briefly discusses the creation of the dictionary data itself and then focuses on issues raised in handling French inflectional morphology.

2. The Dictionary.

The primary role of the dictionary in an NLP system is to store and retrieve information about words. In order for NLP systems to be effective, their dictionaries must contain a lot of information about a lot of words. Chodorow, et al. (1985) and Byrd, et al. (1987) discuss techniques for building dictionaries with the required scope by extracting lexical information from machine-readable versions of published dictionaries. Besides serving the NLP application, some of the lexical information supports that part of the dictionary's access mechanism which permits recognition of morphological variants of the stored words. We have built a UDICT dictionary containing such morphological information for French by starting with an existing spelling correction and synonym aid dictionary¹ and by adding words and information from the French-English dictionary in Collins (1978).

French UDICT contains a data base of over 40,000 lemmata which are stored in a direct access file managed by the Dictionary Access Method (Byrd, et al. (1986)). Each entry in this file has one of the lemmata as its key and contains lexical information about that lemma. Other than the word's part-of-speech, this information is represented as binary features and multi-valued attributes. The feature information relevant for inflectional analysis includes the following:

(1) features: part-of-speech, singular, plural, masculine, feminine, invariant, first (second, third) person, infinitive, participle, past, present, future, imperfect, simple past, subjunctive, indicative, conditional, imperative

¹We are grateful to the Advanced Language Development group of IBM's Application Systems Division in Bethesda, Maryland, for access to their French lexical materials. Those materials include initial categorizations of French words into parts-of-speech and paradigm classes.
Some of these features are explicitly stored in UDICT's data base. Other features -- including many of the stored ones -- control morphological processing by being tested and set by rules in ways that will be described in the next section. Stored features and attributes which are not affected by (and do not affect) morphological processing are called "morphologically neutral." Morphologically neutral information appears in UDICT's output with its stored values unaltered. Such information could include translations from a transfer dictionary in a machine translation system or selectional restrictions used by an NLP system. For French, no such information is stored now, but in other work (Byrd, et al. (1987)) we have demonstrated the feasibility of transferring some additional lexical information (for example, semantic features such as [+human]) from English UDICT via bilingual dictionaries.

It may be useful to point out that, given the ability to store such information about words, one way of building a lexical subsystem would be to exhaustively list and store all inflected words in the language with their associated lexical information. There are at least three good reasons for not doing so. First, even with the availability of efficient storage and retrieval mechanisms, the number of inflected forms is prohibitively large. We estimate that the ratio of the number of French inflected forms to lemmata is around 5 (a little more for verbs, a little less for adjectives and nouns). This ratio would require our 40,000 lemmata to be stored as 200,000 entries, more than we would like. The second reason is that inflected forms sharing the same lemma also share a great deal of other lexical information: namely the morphologically neutral information mentioned earlier. Redundant storage of that information in many related inflected forms does not make sense linguistically or computationally. Furthermore, as new words are added to the dictionary, it would be an unnecessary complication to generate the inflected forms and duplicate the morphologically neutral information. Storing the information only once with the lemma and allowing it to be inherited by derived forms is a more reasonable approach. Third, it is clear that there are many regular processes at work in the formation of inflected forms from their lemmata. Discovering generalizations to capture those regularities and building computational mechanisms to handle them is an interesting task in its own right. We now turn to some of the details of that task.

3. Morphological Processing.

3.1. The mechanism. The UDICT morphological analyzer assumes that words are derived from other words by affixation, following Aronoff (1976) and others. Consequently, UDICT's word grammar contains affix rules which express conditions on the base word and make assertions about the affixed word. These conditions and assertions are stated in terms of the kinds of lexical information listed in (1). An example of an affix rule is the rule for forming French plural nouns shown in Figure 1. This rule -- which, for example, derives chevaux from cheval -- consists of five parts. First, a boundary marker indicates whether the affix is a prefix or a suffix and whether it is inflectional or derivational.
(Byrd (1983) describes further possible distinctions which have so far not been exploited in the French system.) Second, the affix name is an identifier which will be used to describe the morphological structure of the input word. Third, the pattern expresses string tests and modifications to be performed on the input word. In this case, the string is tested for aux at its right end (since this is a suffix rule), two characters are removed, and the letter l is appended, yielding a potential base word. This base word is looked up via a recursive invocation of the rule application mechanism, which includes an attempt to retrieve the form from the dictionary of stored lemmata. The fourth part of the rule, the condition, expresses constraints which must be met by the base word. In this case, it must be a masculine singular (and not plural) noun. The fifth part of the rule, the assertion, expresses modifications to be made to the features of the base in order to describe the derived word.

-pn: aux2l* (noun +masc +sing -plur) (noun +plur -sing)

  affix boundary: "-" ("inflectional suffix")
  affix name: "pn" ("plural noun")
  pattern: "aux2l*" ("check for 'aux', remove 'ux', add 'l', lookup")
  condition: (noun +masc +sing -plur)
  assertion: (noun +plur -sing)

Figure 1. The structure of a UDICT morphological rule.

For this rule, the singular feature is turned off and the plural feature is turned on. Features not mentioned in the assertion retain their original values; in effect, the derived word contains inherited morphologically neutral lexical information from the base combined with information asserted by the rule. For the input chevaux ("horses"), the rule shown in Figure 1 will produce the following analysis:

(2) chevaux: cheval(noun plur masc (structure <<*>N -pn>N))

In other words, chevaux is derived from cheval. It is a plural noun by assertion. It is masculine by inheritance. Its structure consists of the base noun cheval (represented by "<*>N") together with the inflectional suffix "-pn".

In order for rules such as these to operate, there is a critical dependence on having reliable and extensive lexical information about words hypothesized as bases. This information comes from three sources: the stored dictionary, redundancy rules, and other recursively applied affix rules.

While the assumption that affixes derive words from other words seems entirely appropriate for English, it at first seemed less so for French. An initial temptation was to write affix rules which derived inflected words by adding affixes to non-word stems. This was especially true for verbs, where the inflected forms are often shorter than the infinitives used as lemmata, and where some of the verbs -- particularly in the third group -- have very complex paradigms. However, our rules' requirement for testable lexical information on base forms cannot be met by a system in which bases are not words. The machine-readable sources from which we build UDICT dictionaries do not contain information about non-word stems. It is furthermore difficult to design procedures for eliciting such information from native speakers, since people don't have intuitions about forms that are not words. Consequently, we have maintained the English model in which only words are stored in UDICT's dictionary.
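Decoded this way, a suffix rule is mechanical enough to sketch in a few lines. The following Python fragment is an illustration only, not the PL/I implementation: the one-entry dictionary is invented, features are flattened into booleans, and the "*" lookup is reduced to direct retrieval rather than a full recursive rule application.

    # Sketch of applying the -pn rule of Figure 1.
    def apply_suffix_rule(word, check, remove, add, lexicon, condition, assertion):
        """Return the base and derived features if `word` ends in `check` and
        the rebuilt base satisfies `condition`; features not mentioned in the
        assertion are inherited from the base."""
        if not word.endswith(check):
            return None
        base = word[: len(word) - remove] + add          # chevaux -> cheva + l
        feats = lexicon.get(base)                        # the '*' lookup
        if feats is None or any(feats.get(k) != v for k, v in condition.items()):
            return None
        derived = dict(feats)                            # inheritance ...
        derived.update(assertion)                        # ... plus the assertion
        return base, derived

    # A toy one-entry dictionary standing in for the 40,000-lemma data base:
    lexicon = {"cheval": {"cat": "noun", "masc": True, "sing": True, "plur": False}}
    print(apply_suffix_rule(
        "chevaux", check="aux", remove=2, add="l", lexicon=lexicon,
        condition={"cat": "noun", "masc": True, "sing": True, "plur": False},
        assertion={"sing": False, "plur": True}))
    # -> ('cheval', {'cat': 'noun', 'masc': True, 'sing': False, 'plur': True})

Inheritance falls out of copying the base's features and then overwriting only what the assertion mentions, exactly as described for analysis (2).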
UDICT's word grammar includes redundancy rules which allow the expression of further generalizations about the properties of words. In a sense, they represent an extension of the analysis techniques used to populate the dictionary, and their output could well be stored in the dictionary. The following example shows two redundancy rules in the French word grammar:

(3) : 0 (adj -masc -fem) (adj +masc)
    : e0 (adj +masc) (adj +fem)

The first rule has no boundary or affix name and its pattern does nothing to the input word. It expresses the notion that if an adjective is not explicitly marked as either masculine or feminine (the condition), then it should at least be considered masculine (the assertion). The second rule says that any masculine adjective which ends in e is also feminine. Examples are the adjectives absurde, reliable, and vaste, which are both masculine and feminine. Such rules reduce the burden on dictionary analysis techniques whose job is to determine the genders of adjectives from machine-readable resources.

For inflectional affixation, we normally derive the inflected form directly from the lemma. However, recursive rule application plays a role in the derivation of feminine and plural forms of nouns, adjectives, and participles -- which will be discussed under "noun and adjective morphology" -- and in our method for handling stem morphology of the French verbs belonging to the third group, which will be discussed under "verb morphology".

3.2. Noun and adjective morphology. For nouns and adjectives, where inflectional changes to a word's spelling occur only at its rightmost end, the word-based model was simple to maintain.

a. -vpres: ent$ (v +inf) (v -inf +ind +pres +plur +pers3)
b. -vsubj: es$ (v +inf) (v -inf +subj +pres +sing +pers2)
c. -vimpf: ions$ (v +inf) (v -inf +ind +impf +plur +pers1)
d. -vpres: e$ (v +inf) (v -inf +ind +imp +pres +sing +pers1 +pers3)
e. -vpres: ons$ (v +inf) (v -inf +ind +imp +pres +plur +pers1)

Figure 2. Morphological rules which invoke the spelling rules.

As shown in Figure 1, the pattern mechanism supports the needed tests and modifications. For recognition of feminine plurals, we treat the feminine-forming affixes as derivational ones (using an appropriate boundary), so that recursive rule application assures that they always occur "inside of" the plural inflectional affix. For example, heureuses is analyzed as the plural of heureuse, which itself is the feminine of heureux ("happy"). Similarly, élues ("chosen or elected") is the plural of élue which, in turn, is the feminine of élu, itself analyzed as the past participle of the verb élire ("to vote").

3.3. Verb morphology. Many French verbs belonging to the first group (i.e., those whose infinitives end in -er, except for aller) show internal spelling changes when certain inflections are applied. Examples are given in (4), where the inflected forms on the right contain spelling alterations of the infinitive forms on the left.

(4) a. peser - (ils) pèsent
    b. céder - (que tu) cèdes
    c. essuyer - (tu) essuies
    d. jeter - (je, il) jette
    e. placer - (nous) plaçons

These spelling changes are predictable and are not directly dependent on the particular affix that is being applied. Rather, they depend on phonological properties of the affix such as whether it is silent, which vowel it begins with, etc. These spelling rules relate the spelling of the word part "inside of" the inflectional affix to its infinitive form; there are seven of them, given informally in (5).
(The sample patterns should be interpreted as in Figure 1 and are intended to suggest the strategy used to construct infinitive forms from the inflected form. "C" represents an arbitrary consonant, "D" represents t or l, and "=" represents a repeated letter.)

(5) spelling rules:
i1yer* - change i to y and add er, as in essuies/essuyer
ç1cer* - change ç to c and add er, as in plaçons/placer
ge0r* - add r, as in mangeons/manger
èC2eCer* - remove the grave accent from the stem vowel and add er, as in pèsent/peser
èC2éCer* - change the grave accent to acute on the stem vowel and add er, as in cèdes/céder
èCC3éCCer* - like the preceding but with a consonant cluster, as in sèchent/sécher
D=1er* - remove the repeated consonant and add er, as in jette/jeter

It would be inappropriate and uneconomical to treat these spelling rules within the affix rules themselves. If we did so, the same "fact" would be repeated as many times as there were rules to which it applied. Rather, we handle these seven spelling rules with special logic which not only encodes the rules but also captures sequential constraints on their application: if one of them applies for a given affix, then none of the others will apply. The spelling rules are invoked from the affix rules by placing a "$" rather than a "*" in the pattern to denote a recursive lookup. In effect, the base form is looked up modulo the set of possible spelling changes. Example affix rules largely responsible for (and corresponding to) the forms shown in (4) are given in Figure 2.

Verbs of the third group are highly irregular. Traditional French grammar books usually assign each verb anywhere from one to six stem forms. Some examples are given in (6).

(6) stems for third group verbs:
a. partir has stems par-, part-
b. savoir has stems sai-, sav-, sau-, sach-, s-
c. apercevoir, concevoir, décevoir, percevoir, recevoir have stems in -çoi-, -cev-, -çoiv-
d. contredire, dédire, dire, interdire, médire, maudire, prédire, redire have stems in -dis-, -di-, -d-

a. -vcond: rions5* (v +stem -inf) (v +cond +pres +plur +pers1)
b. +vstem: sau1voir* (v +inf -stem) (v +stem -inf)
c. saurions: savoir(verb cond pres plur pers1 (structure <<*>V -vcond>V))

Figure 3. An example of stem morphology.
Note that the structure given does not mention the occurrence of the "+vstem" affix; this is in- tentionai and reflects our belief that the two-level structural analysis -- inflectional affix plus infinitive lemma -- is the appropriate output for all verbs. The intermediate stem level, while im- portant for our processing, is not shown in the output for verbs of the third group. "l~e French word grammar contains 165 verb stem rules and another 110 affix rules for third group verbs. Given the extent of the idiosyncrasy of these verbs and their finite number (there are only about 350 of them), it is natural to wonder whether we might not do just as well by storing the inflected forms. In addition to the arguments given above (about redundant storage of morphologically neutral lexical information, etc.), we can observe that there are generalizations to be made for which treatment by rule is appropri- ate. The lists of verbs shown in (6c,d) have common stem pattemings. Lexicalization of the derived forms of these words would not allow us to capture these generMiTations or to handle the admittedly rare coinage of new words which fit these patterns. 4. Summary and further work A recoguizer for French inflected words has been built using a modified version of UDICT, which is progranuned in PL/I and runs on IBM mainframe computers. Approximately 400 affix and verb stem rules were required, of which over half are devoted to the analysis of French verbs belonging to the third group. 15 redundancy rules and 7 spelling rules were also written. In addition to many minor changes not mentioned in this paper, the major effort in adapting the formerly English-only UDICT system to French involved handling stem morphology. French UDICT contains a dictionary of over 40,000 lemmata, providing fairly complete initial cover- age of most French texts, and forming a setting in which to add further, morphologically neutral, lexical information as required by various appli- cations. We are testing French UDICT with a corpus of Canadian French containing well over 100,000 word types. (q~e corpus size is close to 100,000,000 tokens.) Initial results show that the recognizer successfully analyzes over 99% of the most frequent 2,000 types in the corpus, after we discard those which are proper names or not French. For a small number of words (fewer than 25), spurious information was added to the correct analysis. Work continues toward elimi- nating those errors. We believe that the resulting machinery will be adequate for building dictionaries for other European languages in which we are interested (Spanish, Italian, and German). In particular, we believe that the spelling rule mechanism will help ha reeoguizing German umlauted forms and that the stem mechanism will serve to handle highly irregular paradigms in all of these lan- guages. Expressing spelling rules in a more symbolic no- tation (rather than as logic in a subroutine in- voked from affix rules) would facilitate the task of the grammar writer when creating morphological analyzers for new languages. For French, the bulk of the work done by spelling rules is on behalf of verbs of the first group. However, some of the spelling changes are also observed in other verbs and in nouns and adiec- rives. Currently those effects are handled by affix rules. We look forward to generalizing the cov- erage of our spelling rules and thereby further simplifying the affix rules. We also plan to expand our word ganunar to handle the more productive parts of French deft. 
4. Summary and further work

A recognizer for French inflected words has been built using a modified version of UDICT, which is programmed in PL/I and runs on IBM mainframe computers. Approximately 400 affix and verb stem rules were required, of which over half are devoted to the analysis of French verbs belonging to the third group. 15 redundancy rules and 7 spelling rules were also written. In addition to many minor changes not mentioned in this paper, the major effort in adapting the formerly English-only UDICT system to French involved handling stem morphology. French UDICT contains a dictionary of over 40,000 lemmata, providing fairly complete initial coverage of most French texts, and forming a setting in which to add further, morphologically neutral, lexical information as required by various applications.

We are testing French UDICT with a corpus of Canadian French containing well over 100,000 word types. (The corpus size is close to 100,000,000 tokens.) Initial results show that the recognizer successfully analyzes over 99% of the most frequent 2,000 types in the corpus, after we discard those which are proper names or not French. For a small number of words (fewer than 25), spurious information was added to the correct analysis. Work continues toward eliminating those errors.

We believe that the resulting machinery will be adequate for building dictionaries for other European languages in which we are interested (Spanish, Italian, and German). In particular, we believe that the spelling rule mechanism will help in recognizing German umlauted forms and that the stem mechanism will serve to handle highly irregular paradigms in all of these languages.

Expressing spelling rules in a more symbolic notation (rather than as logic in a subroutine invoked from affix rules) would facilitate the task of the grammar writer when creating morphological analyzers for new languages. For French, the bulk of the work done by spelling rules is on behalf of verbs of the first group. However, some of the spelling changes are also observed in other verbs and in nouns and adjectives. Currently those effects are handled by affix rules. We look forward to generalizing the coverage of our spelling rules and thereby further simplifying the affix rules.

We also plan to expand our word grammar to handle the more productive parts of French derivational morphology. The attachment of derivational affixes to words is constrained by conditions on a much more extensive set of lexical features than the attachment of inflectional affixes. For example, we have observed that feminine-forming suffixes apply only to nouns which denote humans or domestic animals. The idiosyncrasy of this constraint is typical of derivational affixes and provides further justification for our earlier decision to treat feminine-forming suffixes as derivational. By discovering and exploiting such regularities within our framework, we expect to cover a large set of derivational affixes.

References.

Aronoff, M. (1976) Word Formation in Generative Grammar, Linguistic Inquiry Monograph 1, MIT Press, Cambridge, Massachusetts.

Byrd, R. J. (1983) "Word formation in natural language processing systems," Proceedings of IJCAI-VIII, 704-706.

Byrd, R. J. (1986) "Dictionary Systems for Office Practice," IBM Research Report RC 11872, IBM T.J. Watson Research Center, Yorktown Heights, New York.

Byrd, R. J., G. Neumann, and K. S. B. Andersson (1986a) "DAM - A Dictionary Access Method," IBM Research Report, IBM T.J. Watson Research Center, Yorktown Heights, New York.

Byrd, R. J., J. L. Klavans, M. Aronoff, and F. Anshen (1986b) "Computer Methods for Morphological Analysis," Proceedings of the Association for Computational Linguistics, 120-127.

Byrd, R. J., N. Calzolari, M. S. Chodorow, J. L. Klavans, M. S. Neff, and O. A. Rizk (1987) "Tools and Methods for Computational Lexicology," IBM Research Report RC 12642, IBM T.J. Watson Research Center, Yorktown Heights, New York. (to be published in Computational Linguistics 1987)

Chodorow, M. S., R. J. Byrd, and G. E. Heidorn (1985) "Extracting semantic hierarchies from a large on-line dictionary," Proceedings of the Association for Computational Linguistics, 299-304.

Collins (1978) Collins Robert French Dictionary: French-English, English-French. Collins Publishers, Glasgow.

Heidorn, G. E., K. Jensen, L. A. Miller, R. J. Byrd, and M. S. Chodorow (1982) "The EPISTLE Text-Critiquing System," IBM Systems Journal 21, 305-326.

Klavans, J., Nartey, J., Pickover, C., Reich, D., Rosson, M., and Thomas, J. (1984) "WALRUS: High-quality text-to-speech research system," Proceedings of IEEE Speech Synthesis and Recognition, pp. 19-28.

McCord, Michael C. (1986) "Design of a Prolog-Based Machine Translation System," Proc. Third International Conference on Logic Programming, Springer-Verlag, 350-374.

Neff, M. S. and R. J. Byrd (1987) "WordSmith Users Guide: Version 2.0," IBM Research Report RC 13411, IBM T.J. Watson Research Center, Yorktown Heights, New York.

Sowa, J. F. (1984) "Interactive Language Implementation System," IBM J. of Research and Development, vol. 28, no. 1, January 1984, pp. 28-38.
1988
1
An Integrated Framework for Semantic and Pragmatic Interpretation¹

Martha E. Pollack
Artificial Intelligence Center, SRI International
333 Ravenswood Ave, Menlo Park, CA 94025, USA

Fernando C.N. Pereira
Cambridge Computer Science Research Centre, SRI International
Suite 23, Millers Yard, Mill Lane, Cambridge CB2 1RQ, England

Abstract

We report on a mechanism for semantic and pragmatic interpretation that has been designed to take advantage of the generally compositional nature of semantic analysis, without unduly constraining the order in which pragmatic decisions are made. To achieve this goal, we introduce the idea of a conditional interpretation: one that depends upon a set of assumptions about subsequent pragmatic processing. Conditional interpretations are constructed compositionally according to a set of declaratively specified interpretation rules. The mechanism can handle a wide range of pragmatic phenomena and their interactions.

1 Introduction

Compositional systems of semantic interpretation, while logically and computationally very attractive [6,20,26], seem unable to cope with the fact that the syntactic form of an utterance is not the only source of information about its meaning. Contextual information -- information about the world and about the history of a discourse -- influences not only an utterance's meaning, but even its preferred syntactic analysis [3,5,7,16]. Of course, context also influences the interpretation (or meaning in context) of the utterance, in which, for example, referring expressions have been resolved.

One possible solution is to move to an integrated system of semantic and pragmatic interpretation, defined recursively on syntactic analyses that are neutral about those decisions that depend upon context. In this approach, a least-commitment grammar may be used to produce neutral representations that can be reconfigured later. Such a grammar might, for example, leave quantifiers in place [30], attach all prepositional phrases low and right [22], and bracket to the right all compound nominals.² These neutral analyses can then serve as input to a system that produces interpretations (and not meanings) in a nearly compositional manner, in that the interpretation of a phrase³ is a function of the interpretations of its syntactic constituents together with its context of utterance.

This model of semantic interpretation assumes that contextual information is available whenever it is needed for deciding among alternative interpretations. However, this is often not the case: questions about the interpretation of some constituent of an utterance might be answerable only when information about the interpretation of syntactically distant constituents becomes available. Familiar examples of this can be found, for instance, in sentences with quantifier scoping ambiguities and in sentences that include intrasentential anaphora. The so-called donkey sentences [9] exhibit both these phenomena.

These difficulties do not necessitate a complete abandonment of compositionality. To take advantage of the generally compositional nature of semantic analysis without constraining unduly the order in which pragmatic decisions are made, we assign to phrases conditional interpretations, which represent the dependence of a phrase's interpretation on assumptions about subsequent pragmatic processing.

¹This research has been funded by DARPA under Contract N00039-84-C-0524, and by a gift from the Systems Development Foundation as part of a coordinated research effort with the Center for the Study of Language and Information, Stanford University. We would like to thank David Israel, Ray Perrault, and Stuart Shieber for their helpful discussions regarding this work.

²There are reasons to suspect that ultimately syntactic analysis should be incorporated into the same stage of processing as semantic and pragmatic analysis; in particular, it is difficult to develop syntactically neutral representations for certain constructions such as conjunction.

³For simplicity, we shall use the term "phrase" to refer both to an entire utterance and to a constituent of an utterance, distinguishing between the two only when needed.
To take advantage of the generally compositional nature of semantic analysis without constraining unduly the order in which pragmatic decisions are made, we assign to phrases conditional interpretations, which represent the dependence of a phrase's interpretation on assumptions about subsequent pragmatic processing. Conditional interpretations are built compositionally according to declaratively specified interpretation rules.

2 There are reasons to suspect that ultimately syntactic analysis should be incorporated into the same stage of processing as semantic and pragmatic analysis; in particular, it is difficult to develop syntactically neutral representations for certain constructions such as conjunction.

3 For simplicity, we shall use the term "phrase" to refer both to an entire utterance and to a constituent of an utterance, distinguishing between the two only when needed.

The interpretation mechanism we discuss here has been implemented in Prolog as part of the Candide system, a multimodal tool for knowledge acquisition. Incorporating both a graphical interface and a processor for English discourse, Candide allows a user of the Procedural Reasoning System (PRS) [10] to build and maintain procedural networks in a natural way. Procedural networks, an essential part of PRS's knowledge base, encode the information about procedures that is used by PRS for reasoning about and performing tasks in any given domain. The current version of Candide has been used to construct networks for malfunction procedures for NASA's space shuttle. Further details of the Candide system will be presented elsewhere [24].

2 Conditional Interpretations

In our approach to semantic and pragmatic interpretation, conditional interpretations separate the context-independent aspects of an interpretation from those that are context-dependent. Each conditional interpretation consists of a sense and a [possibly empty] set of assumptions. As a first approximation, one might think of the sense of a phrase as representing purely semantic information, that is, information that can be adduced solely from the linguistic content of the phrase, no matter in which context the phrase has been uttered. The assumptions then represent constraints relating the phrase's sense to its ultimate interpretation. A complete interpretation has an empty assumption set, indicating that all of its dependencies on context have been resolved.

The present version of the theory allows for two kinds of assumptions. A bind assumption introduces a new parameter in an interpretation and places constraints on the binding of the parameter to individuals in the context. A restrict assumption does not introduce a new parameter, but instead further restricts the way in which an existing parameter can be bound.

These concepts are illustrated by the following conditional interpretation of the sentence "The jet failed":

⟦"The jet failed"⟧ = ⟨fail(x), {bind(x, def, jet)}⟩   (1)

The first element of the interpretation is the sense fail(x), while the second is the set of assumptions, containing a single assumption whose informal reading is that x should be bound to something of the sort jet according to the constraints of definite reference.
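Since the mechanism is stated to be implemented in Prolog but the paper shows none of Candide's code, the following is a minimal sketch of one possible Prolog encoding of conditional interpretations; the functor names (ci/2, bind/3) and the use of a list for the assumption set are our own illustrative assumptions, not Candide's actual representation.

    % Illustrative encoding: a conditional interpretation pairs a sense
    % with a list standing in for the assumption set.  Parameters such
    % as x in (1) are modelled as Prolog variables, so that binding a
    % parameter instantiates every occurrence of it in the sense.

    % Example (1): [["The jet failed"]] = <fail(x), {bind(x, def, jet)}>
    example_ci(ci(fail(X), [bind(X, def, jet)])).

    % An interpretation is complete when its assumption set is empty.
    complete(ci(_Sense, [])).

    % ?- example_ci(CI), complete(CI).
    % false.     % (1) still carries an undischarged bind assumption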
3 The Interpretation Process

The process of semantic and pragmatic interpretation computes complete interpretations of sentences from least-commitment parse trees. Two types of rules govern the interpretation process: semantic-interpretation rules and pragmatic-discharge rules.

Semantic-interpretation rules specify the conditional interpretation of a phrase in terms of the conditional interpretations of its constituents. Compositionality is enforced by making semantic-interpretation rules sensitive only to the syntactic types of a phrase and its constituents, as well as to the types of assumptions in the conditional interpretations associated with the constituents; semantic-interpretation rules are not sensitive to the senses of the constituents.

Pragmatic-discharge rules change the conditional interpretation of a phrase by specifying how assumptions in the conditional interpretation may be eliminated with respect to the context of utterance. For example, one discharge rule applies to assumptions constraining a parameter to be bound as a definite reference. This rule allows an assumption of the form bind(v, def, T) to be discharged, provided that there is a unique contextually available entity of sort T. The effect of applying the definite discharge rule to an interpretation ⟨S, A⟩ is twofold: the bind assumption operated upon is removed from the set of assumptions A; the sense S is changed to reflect the binding. For instance, if the rule were applied to the interpretation in (1), and if the context of utterance C contained a unique available entity j of sort jet, the resulting interpretation would be

⟨fail(j), ∅⟩   (2)

As we shall see in the next section, assumption discharge will in general not only make use of but also change the discourse context. Therefore, discharge rules should be viewed as four-place relations. For example, the following would be an instance of the discharge relation:

discharge(C, ⟨fail(x), {bind(x, def, jet)}⟩, ⟨fail(j), ∅⟩, C'),

where C is the discourse context before the assumption is discharged, while C' is the resulting discourse context.

Semantic-interpretation rules are obligatory in that some semantic-interpretation rule associated with a given syntactic rule must be applied to any phrase analyzed by the syntactic rule. In contrast, the application of pragmatic-discharge rules is optional, although discharging a particular assumption too early or too late may lead to a dead end in the interpretation process. Applying the same discharge rule at different points in the interpretation process for some utterance may lead to alternative interpretations, as we shall illustrate with the examples in Sections 6 and 7.

Given a sentence and its syntactic analysis, the interpretation process applies semantic-interpretation and pragmatic-discharge rules, according to their applicability conditions, to construct the derivation of a complete interpretation of the sentence. In Candide, this process resembles a syntax-directed translation system [1]. Interpretation starts at the root node of the analysis tree. For each node of the tree, the interpretation process selects an appropriate semantic-interpretation rule and calls itself recursively for each of the node's daughters. Interpretations are constructed on return from the recursion, and pragmatic-discharge rules are optionally applied in a discharge cycle that follows each application to a node of a semantic-interpretation rule.
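The recursive process just described can be summarized in a Prolog skeleton. This is a sketch, not Candide's code: all predicate names (interpret/4, sem_rule/3, discharge/4, lexicon/2) and the tree encoding are our own assumptions, and the rule predicates are left as hooks to be supplied.

    % Sketch of the syntax-directed interpretation loop.  Trees are
    % assumed to be lex(Stem) leaves or node(Type, Daughters);
    % sem_rule/3 and discharge/4 stand for the two rule types above,
    % and lexicon/2 supplies the entries I_W.
    :- dynamic sem_rule/3, discharge/4, lexicon/2.   % hooks

    % interpret(+Tree, +CtxIn, -CondInterp, -CtxOut)
    interpret(lex(Stem), Ctx, CI, Ctx) :-
        lexicon(Stem, CI).
    interpret(node(Type, Daughters), Ctx0, CI, Ctx) :-
        interpret_list(Daughters, Ctx0, DaughterCIs, Ctx1),
        sem_rule(Type, DaughterCIs, CI0),        % obligatory
        discharge_cycle(Ctx1, CI0, CI, Ctx).     % optional discharges

    interpret_list([], Ctx, [], Ctx).
    interpret_list([T|Ts], Ctx0, [CI|CIs], Ctx) :-
        interpret(T, Ctx0, CI, Ctx1),
        interpret_list(Ts, Ctx1, CIs, Ctx).

    % Leaving an assumption in place (first clause) or discharging one
    % (second clause) is the nondeterminism the text goes on to discuss.
    discharge_cycle(Ctx, CI, CI, Ctx).
    discharge_cycle(Ctx0, CI0, CI, Ctx) :-
        discharge(Ctx0, CI0, CI1, Ctx1),
        discharge_cycle(Ctx1, CI1, CI, Ctx).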
Lexical ambiguity, multiple semantic-interpretation rules for a given syntactic construction, optional application of discharge rules, and alternative ways of discharging a given assumption are all sources of nondeterminism in the interpretation process, which need to be somehow controlled. In Candide, we adopted four simple control tactics: overall depth-first search, early discharge of assumptions, breadth-first search for alternative bindings of a discharged parameter, and bounds on assumption percolation wherever it can be shown that an assumption would not be dischargeable outside a certain syntactic domain. For lack of space, a fuller discussion of these heuristics will be conducted elsewhere [24].

4 The Discourse Context

Pragmatic-discharge rules need access to a discourse context that encodes information about relevant world knowledge and the discourse history. Although our framework for semantic and pragmatic interpretation can accommodate alternative representations of the discourse context, the specific discharge rules we have written and incorporated into the Candide system rely on a particular representation comprising four parts: immediate context, local context, global context, and a knowledge base.

During the analysis of a sentence, the immediate context contains detailed information about the entities referred to in that sentence; it is used primarily for resolving intrasentential anaphora. The local context generally contains detailed information about the immediately preceding sentence,4 while the global context includes somewhat less detailed information about entities referred to throughout longer stretches of the discourse. We use the local context primarily for pronoun resolution, following the theory of centering introduced by Grosz et al. [12]. The global context is employed primarily for the resolution of definite anaphora, and is structured as a stack to make use of the theory of focusing. Each element of the global-context stack is itself a list of entries containing information about the entities referred to in a discourse segment [13]. We refer to the top element of the global context as the intermediate context.

4 This will not be true when a "pop" of the global context has just occurred [13].

Individual discharge rules used in processing a sentence can extend the immediate context for that sentence. For instance, the rule mentioned earlier that binds a parameter as a definite reference adds to the immediate context an entry for the entity to which the parameter is bound. When the assumption in (1) is discharged, resulting in the interpretation in (2), an entry for j must be added to the immediate context. The entry will include the sort of j (jet) and the surface position of that phrase in the sentence (subject).

The discourse context must also be updated after each sentence has been processed. In the simplest case, the update will be quite straightforward, as illustrated in Figure 1: the current immediate context will become the new local context, while a subset of the information encoded in the immediate context will also be added to the intermediate context (the topmost element of the global-context stack). The immediate context will be cleared in preparation for the next sentence. For the moment, we shall assume that the knowledge base is static, although it will ultimately have to be reorganizable dynamically so as to reflect a language user's current perspective.

[Figure 1: Updating the Discourse Context -- diagram not reproduced]
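As a concrete gloss on the simple-case update of Figure 1, here is a sketch that assumes the four-part context is a term ctx(Immediate, Local, GlobalStack, KB), with the intermediate context as the head of the global stack; this representation and the summarize/2 placeholder are our own assumptions, not Candide's.

    % Simple-case update of Figure 1 (illustrative encoding): the
    % immediate context becomes the new local context, a subset of its
    % information is merged into the intermediate context (top of the
    % global-context stack), and the immediate context is cleared.
    update_context(ctx(Imm, _OldLocal, [Inter0|Rest], KB),
                   ctx([],  Imm,       [Inter |Rest], KB)) :-
        summarize(Imm, Summary),
        append(Summary, Inter0, Inter).

    % Placeholder: keep everything.  The text says only a subset of the
    % immediate context's detail is promoted; a real version would filter.
    summarize(Entries, Entries).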
In fact, the update function can be rather more complex. For example, if the current utterance is recognized to be the start of a subordinate discourse segment, a new, empty element can be pushed onto the global-context stack after the local context has been merged into the previous top element. We shall discuss the discourse-context update function further elsewhere [24].

5 A Simple Discourse

The following simple discourse will provide our first illustration of the interpretation mechanism and, in particular, the treatment of reference and coreference:

The jet failed.   (3)
Close the manifold.

In the subsequent sections, we shall turn to more complex examples that provide further insight into the way in which pragmatic processes can interact with one another, affecting syntactic and semantic decisions.

The three semantic-interpretation rules given in Figure 2 are needed in the example. Recall that the interpretation process is driven by semantic-interpretation rules, which apply compositionally to phrases. Each such rule has three parts: an applicability condition (AC), a set of selection functions (SF), and a conditional-interpretation function (CIF). The applicability condition specifies the syntactic type of phrase to which the semantic-interpretation rule applies; it is stated in terms of a predicate on trees.5 The selection functions specify how to access the constituents of the phrase to which the rule is to be applied. Finally, the conditional-interpretation function defines the conditional interpretation of the phrase as a function of the conditional interpretations of its constituents. A conditional-interpretation function will often depend separately on the sense and assumptions of a conditional interpretation I, for which we use the notations I_S and I_A, respectively.

5 The meanings of the predicates on trees used in this paper should be clear from their names.

[iv-clause]:
  AC: intrans-verb-clause(T)
  SF: pred(T) = V, arg1(T) = A
  CIF: ⟦T⟧ = ⟨[V]_S, [V]_A ∪ [A]_A ∪ {restrict(arg1, =, [A]_S)}⟩

[def-np]:
  AC: def-np(T)
  SF: arg1(T) = N
  CIF: ⟦T⟧ = ⟨x, [N]_A ∪ {bind(x, def, [N]_S)}⟩

[lex]:
  AC: lex-item(T)
  SF: wordstem(T) = W
  CIF: ⟦T⟧ = I_W

Figure 2: Semantic-Interpretation Rules I

Figure 3 shows an annotated tree6 representing the derivation of a complete interpretation of the first sentence in (3). Conditional interpretations of constituents of the complete sentence are shown above the root nodes of the corresponding subtrees.

6 Our analysis trees are closer to the functional structures of lexical-functional grammar [4] than to the usual surface constituent structure. The sample analyses have been extremely simplified for expository reasons; terminal nodes, in particular, appear in the trees simply as the corresponding word, but their actual representation, as required by interpretation rule [lex], has two branches: wordstem for the actual root form of the terminal, cat for its syntactic category. Finally, tree nodes relevant to the discussion are numbered for ease of reference.

[Figure 3: Interpretation of "The jet failed" -- analysis tree annotated with the conditional interpretations discussed in the text; not reproduced]
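To suggest how declaratively specified rules like those of Figure 2 might be made executable, here is a sketch of two of them as clauses for the sem_rule/3 hook used earlier; the clause names and the ci/2 encoding are our illustrative assumptions, not Candide's rule format.

    % Illustrative rendering of two Figure 2 rules as sem_rule/3 clauses.

    % [def-np]: the mother's sense is a fresh parameter X, constrained
    % by a def bind assumption whose sort is the noun's sense.
    sem_rule(def_np, [ci(NS, NA)], ci(X, [bind(X, def, NS)|NA])).

    % [iv-clause]: the verb's sense, with the union of the daughters'
    % assumptions plus a new restrict assumption for arg1.
    sem_rule(iv_clause, [ci(VS, VA), ci(AS, AA)], ci(VS, Assums)) :-
        append(VA, AA, A0),
        Assums = [restrict(arg1, '=', AS)|A0].

    % ?- sem_rule(def_np, [ci(jet, [])], NP).
    % NP = ci(X, [bind(X, def, jet)]).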
Semantic-interpretation rule [lex] applies to lexical subtrees (Nodes 2 and 5 in Figure 3),7 associating with each wordstem W conditional interpretations I_W according to the lexicon. The lexical entries relevant to the current discussion are:

I_jet = ⟨jet, ∅⟩
I_fail = ⟨fail(x), {bind(x, arg1, device)}⟩

7 Node 4 is also lexical, but definite determiners contribute only to the interpretation of their mother noun phrase, by rule [def-np], rather than being given a separate interpretation.

In the conditional interpretation of a common noun, the sense is always a sort term. The assumption set may be empty, as it is for "jet" above, but for a relational noun it will contain bind assumptions for the relation's arguments, binding parameters occurring in the sort term.

The lexical entries for verbs and the structural rules that combine a verb with its subject and complements must refer, through assumptions, to the grammatical functions that provide the arguments of the predicates that represent the senses of verbs (roughly the governable grammatical functions of lexical-functional grammar [4,27]). Since we are not defending any particular theory of grammar in this paper, we shall skirt a theoretical and terminological minefield by naming the grammatical functions relevant to our purposes arg_i for i = 1, ..., n, and calling them simply "arguments." Arguments are used as edge labels in our analyses, as well as in bind and restrict assumptions, and their intended interpretation should be clear from the examples we are discussing.

The encoding of selectional restrictions is illustrated here in the conditional interpretation of the verb "fail," which is fail(x), under the assumption that x must be bound as first argument of the verb to something of the sort device. This interpretation effectively encodes the information that things that fail are devices.

Because the local tree rooted at Node 3 represents a definite noun phrase, rule [def-np] applies to it in a straightforward fashion, yielding the conditional interpretation

⟨b, {bind(b, def, jet)}⟩   (4)

That is, "the jet" is interpreted as b under the assumption that b can be bound, in accordance with the constraints of definite reference, to an entity of sort jet.

As mentioned earlier, a pragmatic-discharge rule may be used whenever it is applicable to some conditional interpretation in context. In the current example, the rule for discharging the bind assumption is applicable to the conditional interpretation in (4), and it is actually used in the derivation to determine a referent for the definite noun phrase.

The process of resolving a definite reference is of course quite complex [5,11,28,29], and the rule that discharges assumptions to bind a parameter as a definite reference must reflect this complexity. For the moment, let us assume that there is only one entity of the correct sort available for definite reference (perhaps introduced in a preceding portion of the discourse): the jet identified as j. The pragmatic-discharge rule can thus bind the parameter b to j, extend the immediate context accordingly, and delete the bind assumption from the list of assumptions in the current conditional interpretation. The resulting conditional interpretation of the string "the jet" is ⟨j, ∅⟩, shown in Figure 3 above Node 3'.
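The discharge step just described can be sketched under the same illustrative encoding; discharge_def/3 is our own name, the context is modelled as a bare list of entity/2 facts, and a fuller version would also extend the immediate context, as Section 4 describes.

    % Illustrative definite discharge: succeeds only when the context
    % supplies a unique entity of the required sort; unifying X with
    % that entity instantiates every occurrence of X in the sense.
    discharge_def(Ctx, ci(S, A0), ci(S, A)) :-
        select(bind(X, def, Sort), A0, A),
        findall(E, member(entity(E, Sort), Ctx), [X]).  % uniqueness

    % Node 3 to Node 3' of Figure 3:
    % ?- discharge_def([entity(j, jet)], ci(B, [bind(B, def, jet)]), CI).
    % CI = ci(j, []).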
Finally, consider the interpretation of the whole sentence. Rule [iv-clause] applies to the parse tree for the sentence, specifying that its sense is the sense of the predicate (pred) constituent, namely fail(a),8 and that its set of assumptions is the union of (i) the assumptions from its predicate constituent, (ii) the assumptions from its argument (arg1) constituent, and (iii) the new assumption restrict(arg1, =, j), where j is the sense of the argument constituent. The restrict assumption, which arises from the sentence's syntactic form, applies to whatever parameter is to be bound as the first argument of the sense of the sentence, in this case a, as specified by the bind assumption inherited from the predicate constituent. The restrict assumption further constrains the binding of this parameter by requiring that it be equated with the entity j.

8 The conditional interpretation shown above Node 2 in the figure has a new parameter, a, substituted for the variable x of the lexical entry, because parameters introduced through bind assumptions in distinct applications of semantic-interpretation rules in a derivation must themselves be distinct.

The interpretation process is completed after the two remaining assumptions are discharged, as indicated at the top of Figure 3.9 They can be discharged successfully in parallel: binding a to j is legitimate because j is a jet, and jet is a subsort of device. Before the next sentence is processed, the discourse context needs to be updated, as described earlier.

9 In the Candide system as it currently exists, a bind assumption encoding a selectional restriction and a restrict assumption encoding the filler of an argument must be discharged as soon as the latter has been introduced; otherwise an erroneous interpretation might be derived if the restrict assumption is mistakenly applied at a higher clause node. A better scheme would encode sufficient information in these restrict assumptions to ensure that they could apply only to the appropriate clause.

The second sentence of our example is "Close the manifold"; we shall be concerned primarily with the way in which the reference resolution problem is handled. The conditional interpretation for the definite noun phrase "the manifold" is

⟨c, {bind(c, def, manifold)}⟩   (5)

Discharging the bind assumption here requires the use not only of world knowledge -- namely, that each jet is attached to one and only one manifold -- but also of knowledge of the discourse history -- namely, that there is a single salient jet in context, the one identified as j. The latter information can be derived from the discourse context, while the former must be encoded in the knowledge base. This information is sufficient to resolve the reference in the sentence under consideration: "the manifold" refers to the manifold that is attached to j. Hence the interpretation we derive from (5) is

⟨m, ∅⟩   (6)

where m is the unique manifold attached to jet j. For use in constraining subsequent reference, the discourse context must be updated with the information that m has the restricted sort

manifold | λx.attached-to(x, j),

where s|P is the subsort of s whose elements satisfy property P.
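The combination of world knowledge and discourse salience used here can be sketched as follows; attached_to/2, salient_jet/1, and the sample constants are illustrative stand-ins for Candide's knowledge base and discourse context, not its actual predicates.

    % Illustrative knowledge base and discourse state for the example.
    attached_to(m, j).      % world knowledge: m is j's unique manifold
    salient_jet(j).         % discourse history: one salient jet, j

    % "the manifold" resolves to the unique manifold attached to the
    % salient jet, yielding interpretation (6); the findall/3 pattern
    % enforces the uniqueness required for definite reference.
    resolve_the_manifold(M) :-
        salient_jet(J),
        findall(X, attached_to(X, J), [M]).

    % ?- resolve_the_manifold(M).
    % M = m.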
6 Quantifier Scope

We shall now turn to the kind of interactions in pragmatic processing that challenge compositional systems. In this section we shall discuss an example of quantifier scope ambiguity; following that, we shall give an example of our analysis of donkey sentences, involving interactions between quantifier scoping and reference resolution.

The following sentence illustrates the quantifier scoping problem in its simplest form:

Every driver controls a jet.   (7)

This sentence might be given either a wide-scope existential (∃∀) interpretation, in which all the drivers control the same jet, or a narrow-scope existential (∀∃) interpretation, in which each driver controls its own, possibly different, jet.

Interpreting (7) requires additional rules of semantic interpretation, shown in Figure 4, and the lexical entry

I_control = ⟨controls(x, y), {bind(x, arg1, device), bind(y, arg2, device)}⟩

[tv-clause]:
  AC: trans-verb-clause(T)
  SF: pred(T) = V, arg1(T) = A1, arg2(T) = A2
  CIF: ⟦T⟧ = ⟨[V]_S, [V]_A ∪ [A1]_A ∪ [A2]_A ∪ {restrict(arg1, =, [A1]_S), restrict(arg2, =, [A2]_S)}⟩

[gen-quant]:
  AC: gen-quant(T)
  SF: pred(T) = Q, arg1(T) = N
  CIF: ⟦T⟧ = ⟨x, [N]_A ∪ {bind(x, [Q]_S, [N]_S)}⟩

[indef-np]:
  AC: indef-np(T)
  SF: arg1(T) = N
  CIF: ⟦T⟧ = ⟨x, [N]_A ∪ {bind(x, indef, [N]_S)}⟩

Figure 4: Semantic-Interpretation Rules II

Derivations of the ∃∀ and the ∀∃ interpretations are shown in Figures 5 and 6, respectively. In both derivations, the general noun phrase "every driver" is interpreted at Node 2 by rule [gen-quant] and the indefinite noun phrase "a jet" is interpreted at Node 1 by rule [indef-np]. However, the two derivations differ as to where the indefinite-reference assumption is discharged. In Figure 5 the assumption is discharged immediately after its introduction. The resulting sense is a new entity j of sort jet. The same ∃∀ reading could also be derived by allowing the indefinite-reference assumption to percolate up to Node 3, but then discharging it before the generalized-quantifier assumption. In either case, the immediate context is updated at the time of the discharge with an entry for the new entity j.

Somewhat more interesting is the derivation of the ∀∃ reading, shown in Figure 6. The indefinite-reference assumption is allowed to percolate to Node 3, where the generalized-quantifier assumption is discharged. This discharge applies a quantifier to its scope, but it also selects some subset of the outstanding indefinite-reference assumptions in the current conditional interpretation and discharges them, by existential quantification of the respective parameters, within the scope of the generalized quantifier. In our example, the rule converts the conditional interpretation

⟨controls(a, b), {bind(a, ∀, driver), bind(b, indef, jet)}⟩

into the completed interpretation

⟨∀a:driver ∃b:jet controls(a, b), ∅⟩.

7 A Donkey Sentence

We can now discuss the more complicated interactions between assumptions occurring in donkey sentences. Our example will be the sentence:

Every driver controlling a jet closes it.   (8)

Clearly, this sentence has an interpretation in which, for every driver controlling a jet, the driver closes the jet. However, it is difficult to see how this interpretation can be derived compositionally. The well-recognized problem is that, in the intended reading, the indefinite noun phrase "a jet" has narrower scope than the determiner "every," forcing its interpretation to be part of the sort term translating the nominal "driver controlling a jet." But this means that the interpretation of the pronoun "it" will be outside the scope of the indefinite "a jet."

[Figure 5: ∃∀ Interpretation -- derivation tree for sentence (7); not reproduced]
[Figure 6: ∀∃ Interpretation -- derivation tree for sentence (7); not reproduced]

Our solution to the problem of interpreting donkey sentences involves two new mechanisms: capture rules that allow the quantifier in a general noun phrase to discharge, in a particular way, bind assumptions derived from singular noun phrases occurring in the general noun phrase, and a pronoun-resolution rule that discharges a pronoun-introduced bind assumption by replacing the assumption's parameter with the parameter bound by the assumption for a possible antecedent of the pronoun.

Figure 7 shows a simplified derivation of an interpretation of sentence (8), with some of the less interesting assumptions discharged immediately after their introduction rather than being listed explicitly. Before discussing the main points of this example, we need to explain our somewhat nonstandard representation of [reduced] relative clauses, as in the compound nominal "driver controlling a jet" (Node 2). A relative clause is represented as a main clause but has one of its argument positions filled by a nominal (the head noun modified by the relative clause) instead of a noun phrase. The discharge rule discussed in Section 5 that combines a verb argument with its filler then has two versions: one in which the filler sense is an entity, already described, and one in which the filler sense is a sort. In the latter case, the rule produces an interpretation whose sense is the filler sort restricted by the sense of the clause. In the foregoing example, the sort-filler discharge rule is applied to the interpretation

⟨controls(x, b), {bind(x, arg1, device), restrict(arg1, =, driver), bind(b, indef, jet)}⟩

to produce the restricted sort

⟨driver | λx.controls(x, b), {bind(b, indef, jet)}⟩

[Figure 7: Interpretation of a Donkey Sentence -- annotated derivation tree for sentence (8), with Nodes 1-4 and 4', 4'', 4'''; not reproduced]

After these preliminaries, we can go on to the main point of the example. The first observation to make is that the sentence has an alternative (albeit unlikely) interpretation in which "a jet" is taken to refer to a specific jet that every driver controls. This interpretation would be derived by discharging the corresponding indefinite-reference assumption at Nodes 1 or 2 in the derivation.10 We shall assume that this is not done, and that the indefinite-reference assumption is therefore available at Node 3.

10 A third interpretation is also possible, in which "a jet" is interpreted as a narrow-scope (nonreferential) existential, and "it" is interpreted as having an extrasentential referent. Limitations in Candide's handling of nonreferential indefinites preclude this reading, but a somewhat different rule system will generate all three readings correctly [23].

So far, bind assumptions have been given as triples of a parameter, a binding criterion (derived from a determiner), and a sort restriction for the parameter. In fact, a fourth component of dependencies
is in general required: a set of other assumptions that the given assumption may depend on.11 An assumption α (the dependent assumption) depends on another assumption β (the independent assumption) whenever the parameter for β occurs in the sort constraint of α. For the language fragment under discussion, α would be the bind assumption for a complex noun phrase and β the bind assumption for a noun phrase within a prepositional phrase or relative clause in the complex noun phrase. For correct binding of quantified parameters, semantic-interpretation and discharge rules must maintain the invariant that assumptions on which a given assumption depends can occur only in its set of dependencies. Consequently, whenever a dependent assumption α is introduced, any other assumption on which it depends must be moved into α's dependencies, thereby becoming inaccessible to discharge rules. If α is later discharged, the assumptions in its set of dependencies again become accessible to discharge rules.

11 In the examples so far this set has been empty and therefore omitted for the sake of clarity.

Semantic interpretation must be modified to fit this analysis. For instance, rule [gen-quant], given earlier, should instead be

[gen-quant']:
  AC: gen-quant(T)
  SF: pred(T) = Q, arg1(T) = N
  CIF: ⟦T⟧ = ⟨x, {bind(x, [Q]_S, [N]_S, [N]_A)}⟩

In Figure 7, this rule is applied at Node 3.

Capture may occur whenever a generalized-quantifier assumption with a nonempty set of dependencies D is discharged. Any indefinite assumption in D may be captured by turning it into a universal-quantification assumption and putting it into the set of assumptions for the new conditional interpretation. In our example, the indefinite assumption for "a jet" is captured in the discharge of the universal assumption for "every driver ...", from Node 4 to Node 4' in the derivation. The resulting assumption is now universal. If this assumption were discharged immediately, there would be no way of discharging the pronoun assumption as an intrasentential anaphoric reference. Instead, the pronoun-resolution rule is applied to discharge the pronoun assumption, causing identification of the pronoun parameter c with the jet parameter b. The resulting conditional interpretation is 4''. Finally, the remaining assumption can be discharged by quantification, leading to the complete interpretation at Node 4'''.

The example shows how assumptions allow interactions between reference and quantification to be left unresolved until all the necessary information becomes available. Early discharge of the assumption for "a jet" blocks the desired interpretation for the pronoun "it"; capture makes available the attributive use of "a jet" at an appropriate point for its identification with the direct object of "close."
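A rough sketch of capture and pronoun resolution follows, assuming the four-component bind assumptions of rule [gen-quant'] are encoded as bind(Param, Criterion, Sort, Dependencies); the predicate names, the simplified treatment of scope, and the omission of sort and agreement checking are all our own simplifications, not Candide's rules.

    % Illustrative capture: discharging a universal bind assumption
    % quantifies the sense and re-exposes any captured indefinite
    % dependencies as universal assumptions (Node 4 to Node 4').
    capture_discharge(ci(S0, A0), ci(forall(V, Sort, S0), A)) :-
        select(bind(V, forall, Sort, Deps), A0, A1),
        capture_all(Deps, A1, A).

    capture_all([], A, A).
    capture_all([bind(W, indef, S, D)|Ds], A0, A) :-  % indef -> universal
        capture_all(Ds, [bind(W, forall, S, D)|A0], A).

    % Pronoun resolution (Node 4' to 4''): discharge a pron assumption
    % by unifying its parameter with a possible antecedent's parameter.
    resolve_pronoun(ci(S, A0), ci(S, A)) :-
        select(bind(V, pron, _Agr, _), A0, A),
        member(bind(V, _, _, _), A).

Applying capture_discharge, then resolve_pronoun, and finally discharging the remaining (captured) universal assumption mirrors the progression from Node 4 through 4' and 4'' to the complete interpretation at 4'''.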
8 Related Research

Strictly compositional approaches to semantic interpretation, such as Montague grammar [19], have so far proved inadequate for dealing with interactions between meaning and context; reasons for this are noted in Section 1. Our approach can be thought of as a generalization of the compositional mechanism of Cooper storage [6], or of its computational analogue developed by Woods [30]. Alternative approaches that attempt to address these interactions include discourse-representation theory (DRT) [14,18] and Barwise's partial-valuation approach [2].

In DRT, the interpretation of a sentence is derived in a compositional manner from an intermediate representation called a discourse-representation structure (DRS). However, the rules that have been developed for constructing DRSs are not themselves compositional. According to the DRS-construction rules presented by Kamp [18], the DRS for a phrase is found only as a by-product of finding the DRS for the embedding discourse. In particular, DRS-construction rules apply only after the relative scope of noun phrases and anaphoric bindings have been determined. It is conceivable that our notion of conditional interpretation might be reexpressible in DRT terms, leading to a compositional system for DRS construction.

Barwise [2] uses the notion of partial valuation, that is, partial assignments of values to variables, to analyze the sorts of interactions exemplified by the donkey sentences. Similar comments apply to Webber's work [29]. In addition, none of the aforementioned accounts has been concerned with as wide a range of phenomena as is currently handled in Candide.12 One of the motivations for our work has been to see how Barwise's direct-interpretation approach could be turned into a two-stage one in which phrases are first "compiled" into conditional interpretations, which are then "executed" by applying pragmatic-discharge rules.

12 To date, we have included capabilities for processing reference and coreference (definite and indefinite noun phrases, pronouns, possessives, and proper nouns), quantifier scope, compound nominals, prepositional-phrase attachment, and certain types of underspecified relations (e.g., main-verb "have"). We shall report on these mechanisms elsewhere [24].

Finally, several other computational systems developed recently are concerned with interactions between context and meaning, especially Pundit [8,21] and Tacitus [17,16]. Both these systems have emphasized solutions to such difficult pragmatic problems as reference resolution. In particular, the Pundit project has made notable progress on the question of resolving missing arguments, while the Tacitus group has done the same for questions involving the determination of implicit relations. In Candide, solutions to such pragmatic problems should be encoded in the procedures that discharge assumptions; in future versions of the system the discharge procedures might be improved by applying some of the techniques developed in this other work. What neither Pundit nor Tacitus has been concerned with is the question of how to build interpretations compositionally. Both systems first build partial interpretations of sentences, and then attempt to solve a collection of associated pragmatic problems. Pundit does the latter in an overly constrained way, with the result that it cannot handle systematically the sort of interactions exemplified by the donkey sentences. Tacitus, on the other hand, casts all the pragmatic problems as theorems to be proved; the result is an underconstrained control strategy. We believe that the generally compositional approach developed in Candide enables us to avoid both these extremes.
9 Further Work

We have developed a mechanism of semantic and pragmatic interpretation that relaxes the constraints of compositional semantics just enough to allow pragmatic information to play its necessary role in the derivation of sentence interpretations. Central to the mechanism are conditional interpretations, which allow us to separate constraints on interpretation that depend only on syntactic structure, represented by the sense component of the conditional interpretation, from those that depend on pragmatic choices, represented by the assumption component. The interpretation process is carried out by a combination of semantic-interpretation rules, which build conditional interpretations of phrases on the basis of lexical and syntactic information, and pragmatic-discharge rules, which satisfy assumptions on the basis of discourse and domain information. While the system we have implemented deals with a variety of semantic and pragmatic phenomena, of which only a few were discussed in this paper, it can only be seen as a first limited instantiation of a system architecture that requires much further work. We shall mention now a few of the directions that might be pursued in developing the architecture further.

At the most theoretical level, it is interesting to note the formal similarity of our interpretation rules to rules in "deductive" models of programming-language semantics [25]. It is also interesting to consider the connection between conditional interpretations and the relational theory of meaning from situation semantics [3]. These two similarities might be fruitful in developing a semantic justification for our formal interpretation rules in terms of constraints on interpretation relations.

The applicability of discharge rules depends in many cases on the compatibility of expected and supplied sorts for relation arguments. In general, these sorts may be parameterized by assumption parameters, and some semantic-interpretation problems not considered here suggest that higher-order parameterized types, instead of first-order sorts, may be needed. A suitable notion of type subsumption for such higher-order parameterized types [15] would be useful. More generally, the whole architecture would benefit from a semantically grounded treatment of parameters and parameterized objects.

Other pragmatic processes associated with discharge rules, such as those for reference resolution, also must be able to reason with parameterized objects--for example, in checking the uniqueness of a dependent object relative to arbitrary parameter assignments. Ultimately, the proper treatment of singular noun phrases in context will require a closer connection between assumptions and [parameterized] fragments of the discourse context.

References

[1] A. V. Aho, R. Sethi, and J. D. Ullman. Compilers: Principles, Techniques and Tools. Addison-Wesley, Reading, Massachusetts, 1985.

[2] J. Barwise. Noun phrases, generalized quantifiers and anaphora. In P. Gärdenfors, editor, Generalized Quantifiers: Linguistic and Logical Approaches, pages 1-29, D. Reidel, Dordrecht, Netherlands, 1987.

[3] J. Barwise and J. Perry. Situations and Attitudes. MIT Press, Cambridge, Massachusetts, 1983.

[4] J. Bresnan and R. Kaplan. Lexical-functional grammar: a formal system for grammatical representation. In J. Bresnan, editor, The Mental Representation of Grammatical Relations, pages 173-281, MIT Press, Cambridge, Massachusetts, 1982.

[5] D. Carter.
Interpreting Anaphors in Natural Language Texts. Ellis Horwood, Chichester, England, 1987.

[6] R. Cooper. Quantification and Syntactic Theory. D. Reidel, Dordrecht, Netherlands, 1983.

[7] S. Crain and M. Steedman. On not being led up the garden path: the use of context by the psychological syntax processor. In D. Dowty, L. Karttunen, and A. Zwicky, editors, Natural Language Parsing: Psychological, Computational, and Theoretical Perspectives, pages 320-358, Cambridge University Press, Cambridge, England, 1985.

[8] D. A. Dahl, M. S. Palmer, and R. J. Passonneau. Nominalizations in Pundit. In Proceedings of the 25th Annual Meeting of the Association for Computational Linguistics, pages 131-139, Stanford, California, 1987.

[9] P. Geach. Reference and Generality. Cornell University Press, Ithaca, New York, 1962.

[10] M. P. Georgeff and A. L. Lansky. Procedural knowledge. Proceedings of the IEEE, Special Issue on Knowledge Representation, 1383-1398, 1986.

[11] B. J. Grosz. The Representation and Use of Focus in Dialogue Understanding. Technical Report 151, SRI International, Menlo Park, California, 1977.

[12] B. J. Grosz, A. K. Joshi, and S. Weinstein. Providing a unified account of definite noun phrases in discourse. In Proceedings of the 21st Annual Meeting of the Association for Computational Linguistics, pages 44-50, Cambridge, Massachusetts, 1983.

[13] B. J. Grosz and C. L. Sidner. Attention, intentions, and the structure of discourse. Computational Linguistics, 12(3):175-204, 1986.

[14] F. Guenthner, H. Lehmann, and W. Schonfeld. A theory for the representation of knowledge. IBM Journal of Research and Development, 30(1):39-56, January 1986.

[15] R. Harper, F. Honsell, and G. Plotkin. A framework for defining logics. In Proceedings of the Second Symposium on Logic in Computer Science, Cornell University, IEEE, Ithaca, New York, 1987.

[16] J. R. Hobbs. Interpretation as abduction. In Proceedings of the 26th Annual Meeting of the Association for Computational Linguistics, Buffalo, New York, 1988.

[17] J. R. Hobbs. Overview of the TACITUS project. Computational Linguistics, 12(3), 1986.

[18] H. Kamp. A theory of truth and semantic interpretation. In J. A. G. Groenendijk, T. M. V. Janssen, and M. B. J. Stokhof, editors, Formal Methods in the Study of Language, pages 277-322, Mathematisch Centrum, Amsterdam, Netherlands, 1981.

[19] R. Montague. The proper treatment of quantification in ordinary English. In R. H. Thomason, editor, Formal Philosophy, Yale University Press, 1973.

[20] R. C. Moore. Problems in logical form. In Proceedings of the 19th Annual Meeting of the Association for Computational Linguistics, pages 117-124, Stanford, California, 1981.

[21] M. S. Palmer, D. A. Dahl, R. J. Schiffman, L. Hirschman, M. Linebarger, and J. Dowding. Recovering implicit information. In Proceedings of the 24th Annual Meeting of the Association for Computational Linguistics, pages 10-19, New York, 1986.

[22] F. C. Pereira. Logic for Natural Language Analysis. Technical Report 275, SRI International, Menlo Park, California, 1983.

[23] F. C. Pereira. Towards a deductive theory of sentence interpretation. Unpublished manuscript.

[24] F. C. Pereira and M. E. Pollack. A compositional, declarative system for semantic and pragmatic interpretation. In preparation.

[25] G. D. Plotkin. A Structural Approach to Operational Semantics. Lecture notes DAIMI FN-19, Aarhus University, Aarhus, Denmark, September 1981.

[26] L. K. Schubert and F. J. Pelletier.
From English to logic: context-free computation of 'conventional' logical translation. Computational Linguistics, 8(1):26-44, 1982.

[27] P. Sells. Lectures on Contemporary Syntactic Theories. Volume 3 of CSLI Lecture Notes, Center for the Study of Language and Information, Stanford University, Stanford, California, 1985. Distributed by University of Chicago Press.

[28] C. L. Sidner. Focusing in the comprehension of definite anaphora. In Computational Models of Discourse, MIT Press, Cambridge, Massachusetts, 1983.

[29] B. L. Webber. So what can we talk about now? In Computational Models of Discourse, MIT Press, Cambridge, Massachusetts, 1983.

[30] W. A. Woods. Semantics and quantification in natural language question answering. In M. Yovits, editor, Advances in Computers, Vol. 17, pages 2-64, Academic Press, New York, New York, 1978.